Kubernetes Tip: NetworkPolicy as a Practical Default-Deny
A step-by-step approach to introducing NetworkPolicy without breaking everything on day one.
NetworkPolicy is one of the best tools to reduce blast radius, but it can be painful if you flip “default deny” too early.
Prerequisite
NetworkPolicy enforcement depends on your CNI:
- Some CNIs enforce it by default
- Some require explicit enablement
Verify in your environment before relying on policies.
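One heuristic, assuming a conventional setup where the CNI runs in kube-system (names and namespaces vary by distribution), is to identify the plugin and then check its documentation for NetworkPolicy support:

```bash
# Identify the installed CNI from the system pods; e.g. flannel alone does
# NOT enforce NetworkPolicy, while Calico, Cilium, Weave and Antrea do.
kubectl get pods -n kube-system -o wide | grep -Ei 'calico|cilium|flannel|weave|antrea'
```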
Step 1: Start with a namespace-level default deny (ingress)
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: app
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```
This blocks all incoming traffic to Pods in the `app` namespace unless another policy explicitly allows it.
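To sanity-check the deny, try reaching a workload in `app` from outside the namespace. A sketch, assuming a Service named `api` on port 8080 exists in `app` (substitute a real target from your cluster):

```bash
# Throwaway client Pod outside "app"; with default-deny in place the
# request should time out instead of connecting.
kubectl run netpol-test -n default --rm -it --restart=Never \
  --image=busybox:1.36 -- wget -qO- -T 3 http://api.app.svc.cluster.local:8080
```

Once you add an allow rule covering the client, the same command should succeed.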
Step 2: Allow from your ingress controller
Example: allow traffic from the `ingress-nginx` namespace to Pods labeled `app=api`.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress
  namespace: app
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes: [Ingress]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
```
Step 3: Add egress control carefully
Egress is where things break unexpectedly (DNS, metrics, tracing, external APIs).
Start by explicitly allowing DNS:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: app
spec:
  podSelector: {}
  policyTypes: [Egress]
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```
Debugging tips
When something breaks:
```bash
kubectl get netpol -n app
kubectl describe netpol -n app <name>
kubectl exec -n app -it <pod> -- sh
```
Try connectivity tests from inside the Pod (or with an ephemeral container).
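If the Pod image lacks tooling, an ephemeral debug container keeps the workload image minimal. A sketch (the `nicolaka/netshoot` image is one common choice, not a requirement; ephemeral containers are GA since Kubernetes 1.25):

```bash
# Attach a temporary container with network tooling to a running Pod.
kubectl debug -n app -it <pod> --image=nicolaka/netshoot -- sh
```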
Checklist
- Confirm your CNI enforces NetworkPolicy
- Default deny ingress per namespace
- Allow only the paths you need
- Treat egress as a separate rollout (DNS first)
Understand the default behavior: policies are allow-lists
NetworkPolicy is not a firewall rule engine that “adds blocks on top of allows”. It behaves more like this:
- If no policy selects a Pod for a given direction (Ingress/Egress), that direction is allowed by default.
- Once a Pod is selected by a policy of a given type, traffic is denied by default and only traffic explicitly allowed by policies is permitted.
This is why “default deny” is typically implemented as:
- a policy with `podSelector: {}` and `policyTypes: [Ingress]` (or `[Egress]`)
It selects all Pods in the namespace and flips the default.
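For example, a single manifest that flips the default in both directions; a sketch you would apply only once you are ready for the egress consequences discussed below:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: app
spec:
  podSelector: {}   # selects every Pod in the namespace
  policyTypes:
    - Ingress       # Pods become ingress-isolated...
    - Egress        # ...and egress-isolated; only explicit allows pass
```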
Ingress design: start with “who can talk to me”
A safe rollout sequence:
- Default deny ingress
- Allow ingress from known entry points (ingress controller / gateway)
- Allow intra-namespace traffic if needed (service-to-service)
- Add tighter allow lists per workload over time
Allow traffic within the namespace (common microservice need)
If services talk to each other inside the same namespace, add:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: app
spec:
  podSelector: {}
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector: {}
```
Then tighten with labels later (for example, only allow `app=frontend` to reach `app=api`).
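A sketch of that tightening, assuming the workloads carry `app=frontend` and `app=api` labels:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: app
spec:
  podSelector:
    matchLabels:
      app: api              # applies to the api Pods
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # only frontend Pods in this namespace may connect
```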
Egress design: treat it as a migration project
Egress is where you break things you didn’t know you depended on:
- DNS
- metrics / tracing exporters
- time sync, certificate fetching, external APIs
- cloud metadata endpoints (in some environments)
A practical approach:
- Turn on egress default deny for a single namespace (or one workload).
- Allow DNS (CoreDNS) explicitly.
- Add specific egress rules for known dependencies.
- Observe failures and iterate.
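A sketch of the first step, scoped to a single workload so one mistake cannot take out the whole namespace (the `app=api` label is an assumption):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress-api
  namespace: app
spec:
  podSelector:
    matchLabels:
      app: api     # one workload first, not the whole namespace
  policyTypes:
    - Egress       # no egress rules listed: all outbound traffic is denied
```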
Better DNS allow rule (target CoreDNS pods)
Allowing egress to the whole kube-system namespace is broader than necessary. A tighter approach is to allow egress to CoreDNS pods by label.
The label differs across clusters; a common one is `k8s-app=kube-dns`:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: app
spec:
  podSelector: {}
  policyTypes: [Egress]
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```
External egress: start with allowlists
If your services need to call external APIs, you can allow by CIDR using `ipBlock`:

```yaml
egress:
  - to:
      - ipBlock:
          cidr: 203.0.113.0/24
    ports:
      - protocol: TCP
        port: 443
```
Notes:
- `ipBlock` is IP-based, not DNS-based. Many public APIs change IPs, so consider whether you need a stable egress proxy or NAT with fixed ranges.
- Some CNIs have limitations around `ipBlock` and certain traffic patterns.
Testing and debugging (what to do when traffic is blocked)
When a request fails after applying policies:
- Confirm which policies select the Pod:

  ```bash
  kubectl get netpol -n app
  kubectl describe netpol -n app <name>
  ```

- Test connectivity from inside the Pod network namespace:
  - if you have tools: `curl`, `wget`, `nc`, `dig`
  - if not: use an ephemeral container to run tooling safely
- Check whether your CNI provides policy logs/flow logs.
Flow logs can turn “it’s blocked” into “it’s blocked because rule X doesn’t match label Y”.
Operational checklist (day-2)
- Keep label taxonomy stable (policies depend on labels).
- Roll out policies progressively (one namespace/workload at a time).
- Maintain a documented allow-list for common platform dependencies (DNS, metrics, tracing).
- Validate policies in staging with realistic traffic patterns.
NetworkPolicy is one of the highest ROI security controls in Kubernetes, but only if you introduce it in a way your teams can operate confidently.
Common “platform allow” policies you’ll likely need
In real clusters, workloads often need to talk to shared platform components. A few examples (vary by environment):
- monitoring scrapers (Prometheus) scraping `/metrics`
- service meshes or gateways (control planes)
- logging agents or collectors (if they run as services)
Rather than punching holes ad-hoc, define a small set of vetted policies per namespace:
- allow ingress from ingress/gateway namespaces to selected apps
- allow ingress from monitoring namespace to metrics ports
- allow egress to DNS (CoreDNS)
Then keep workload-specific policies separate. This makes review and incident response much easier.
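For example, a vetted monitoring policy might look like the sketch below; the `monitoring` namespace name and port 9090 are assumptions to adjust for your setup:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring-scrape
  namespace: app
spec:
  podSelector: {}          # every Pod exposing metrics in this namespace
  policyTypes: [Ingress]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring  # assumed namespace name
      ports:
        - protocol: TCP
          port: 9090       # assumed metrics port; match your containers
```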
Label strategy: your policies are only as good as your labels
NetworkPolicy selectors are label-based, so treat labels as API:
- avoid constantly changing labels used by security policies
- standardize label keys (`app`, `component`, `tier`, etc.)
- apply namespace labels consistently (so `namespaceSelector` rules keep working)
If you rely on:
```yaml
namespaceSelector:
  matchLabels:
    kubernetes.io/metadata.name: ingress-nginx
```
make sure those labels exist in your cluster (most modern Kubernetes versions apply them automatically).
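A quick way to verify, assuming the namespace is named ingress-nginx:

```bash
# kubernetes.io/metadata.name is applied automatically on Kubernetes 1.22+.
kubectl get namespace ingress-nginx --show-labels
```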
“Why is it still allowed?”: multiple policies can combine
Remember that multiple policies can select the same Pod. The effective allowed traffic is the union of all allowed rules for that direction.
This is useful (layered policies), but it can also confuse debugging:
- an “allow all from namespace X” policy can quietly widen access beyond your intended restriction (allows are additive; no policy can subtract what another allows)
When troubleshooting, always list all policies that select the Pod.
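A minimal illustration of the union behavior, assuming both policies below select the same `app=api` Pods; the effective allow list is frontend plus everything in namespace `x`, not the intersection:

```yaml
# Policy 1: the intended restriction - only frontend may reach api.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-api
  namespace: app
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
---
# Policy 2: a broader allow that also selects the api Pods.
# The union applies: api now accepts frontend AND all of namespace x.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-namespace-x
  namespace: app
spec:
  podSelector: {}
  policyTypes: [Ingress]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: x
```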
Final advice
Start small, measure impact, and iterate. The goal is not perfect isolation on day one—the goal is steadily reducing unnecessary connectivity while keeping the system operable.
FAQ
Q: Why is my policy ignored? A: NetworkPolicies require a compatible CNI plugin. If the cluster has no provider, policies are inert.
Q: How do I allow DNS? A: Allow egress to CoreDNS (kube-dns) in the kube-system namespace on UDP and TCP port 53.
Q: Does every Pod get isolated? A: Only Pods selected by at least one policy are isolated; others remain default-allow.