Kubernetes Tip: RBAC Least Privilege (Practical, Not Painful)
A production-friendly approach to ServiceAccounts, Roles, and bindings that minimizes blast radius without breaking workflows.
RBAC is one of the highest leverage controls in Kubernetes. Done well, it prevents small mistakes from becoming cluster-wide incidents. Done poorly, it becomes “just give it cluster-admin” and nobody learns anything.
This tip focuses on practical least privilege: start with good defaults, make escalation intentional, and keep the system operable.
Mental model (the 3 building blocks)
- Subject: who is making the request (user, group, ServiceAccount).
- Role / ClusterRole: what actions are allowed (verbs on resources).
- RoleBinding / ClusterRoleBinding: attaches a role to a subject.
If you remember only one thing:
A Role is just rules. A Binding is what makes it real.
Namespaced vs cluster-wide
- Role and RoleBinding are namespaced. They grant access within a namespace.
- ClusterRole and ClusterRoleBinding can grant access across the whole cluster.
Best practice:
- Prefer Role + RoleBinding for applications.
- Use ClusterRole sparingly, mostly for cluster-level controllers or platform operators.
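A useful middle ground: define a rule set once as a ClusterRole, then grant it per namespace with a RoleBinding. A RoleBinding that references a ClusterRole still only grants access within its own namespace. A sketch, with illustrative names (`pod-reader`, group `devs`):

```yaml
# Define the rules once, cluster-wide. No access is granted yet.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader          # illustrative name
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Bind it inside one namespace: access is limited to "app".
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader
  namespace: app
subjects:
- kind: Group
  name: devs                # illustrative group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-reader
```

This avoids copy-pasting identical Roles into every namespace while keeping the grants themselves namespaced.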
ServiceAccounts: don’t use the default one
Every Pod runs as a ServiceAccount. If you don’t specify one, it uses the namespace’s default ServiceAccount.
Production guidance:
- Create a dedicated ServiceAccount per workload (or per app).
- Turn off token mounting unless the Pod actually needs to call the Kubernetes API.
Example:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: api
  namespace: app
automountServiceAccountToken: false
```
Then in your Deployment:
```yaml
spec:
  template:
    spec:
      serviceAccountName: api
      automountServiceAccountToken: false
```
If the app needs to call the Kubernetes API (for leader election, watching resources, etc.), set automountServiceAccountToken: true explicitly and review RBAC carefully.
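For the rare workload that does need the API, a sketch of opting back in explicitly (the ServiceAccount name is illustrative):

```yaml
spec:
  template:
    spec:
      serviceAccountName: controller        # illustrative SA that needs the API
      automountServiceAccountToken: true    # explicit opt-in, paired with a reviewed Role
```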
Roles: write the smallest useful rules
RBAC rules are a list of:
- apiGroups (e.g. `""`, `apps`, `batch`)
- resources (e.g. `pods`, `configmaps`, `deployments`)
- verbs (e.g. `get`, `list`, `watch`, `create`, `update`, `patch`, `delete`)
Start with read-only “support” access
For troubleshooting, many teams grant engineers read-only access to common resources in a namespace:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ns-observer
  namespace: app
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "services", "endpoints", "configmaps", "events"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "watch"]
```
Note the subresource pods/log: if you can read Pods but not logs, you’ll still feel “blocked” during incidents.
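A Role on its own does nothing until it is bound. A sketch of a binding for a hypothetical support group (the group name would come from your identity provider):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ns-observer
  namespace: app
subjects:
- kind: Group
  name: support-engineers    # hypothetical group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ns-observer
```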
Understand subresources (common RBAC gotcha)
Some actions require permission on a subresource, not the parent:
- logs: `pods/log`
- exec: `pods/exec`
- port-forward: `pods/portforward`
- status updates: `deployments/status`, `pods/status`
If your tool or controller says “forbidden” even though you granted access to pods, check subresources.
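As a sketch, a Role fragment that grants Pods plus their common debug subresources. Note that exec and port-forward sessions are authorized with the `create` verb, not `get`:

```yaml
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods/exec", "pods/portforward"]
  verbs: ["create"]   # sessions are created, not read
```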
Bindings: keep human access separate from workload access
Workloads (ServiceAccounts) and humans (users/groups) should not share bindings casually.
Patterns that work well:
- Workload SA: only the API permissions the code actually needs.
- Human operators: read-only by default; separate escalation role for write operations.
This makes audits meaningful: “the app can only do X” is very different from “Alice can do X”.
Debugging RBAC with kubectl auth can-i
This command is your best friend:
```shell
kubectl auth can-i get pods -n app
kubectl auth can-i get pods/log -n app
kubectl auth can-i create pods/exec -n app
```
You can also impersonate:
```shell
kubectl auth can-i get secrets -n app --as system:serviceaccount:app:api
```
If you use groups:
```shell
kubectl auth can-i get pods -n app --as alice --as-group devs
```
This is how you validate rules before shipping them.
Common least-privilege recipes
1) App that only reads ConfigMaps
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cm-reader
  namespace: app
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cm-reader
  namespace: app
subjects:
- kind: ServiceAccount
  name: api
  namespace: app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cm-reader
```
2) App that needs leader election (leases)
Many controllers use coordination.k8s.io Leases:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: leader-election
  namespace: app
rules:
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
```
3) Job runner that creates Pods (high risk)
If an app can create Pods, it can often escalate impact (run cryptominers, mount secrets, etc.). Treat “create pods” as a high-privilege action and gate it heavily.
If you must allow it:
- scope to a namespace
- restrict Pod Security and admission policies
- consider using a dedicated controller with strict templates rather than arbitrary pod creation
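One way to bound what those created Pods can do is Pod Security Admission, enforced with namespace labels. A sketch, assuming a dedicated namespace for the job runner (the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: jobs              # hypothetical namespace for the job runner
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
```

Even if the ServiceAccount can create Pods here, privileged containers, host mounts, and similar escalation paths are rejected at admission.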
Red flags (things to avoid)
- `cluster-admin` for application ServiceAccounts
- broad wildcard roles (e.g., `resources: ["*"]`, `verbs: ["*"]`) without a strong reason
- granting `secrets` read access to many workloads (secrets are the keys to the kingdom)
- mixing "break-glass" access into day-to-day bindings
Make escalation intentional (break-glass)
In real operations, you will sometimes need elevated access. The key is to make it:
- time-bound (temporary)
- audited (ticket/incident link)
- scoped (namespace, specific action)
Many teams implement:
- a dedicated “break-glass” group
- approvals and session recording
Even simple process changes (and logs) are a big improvement over permanent cluster-admin.
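Even without dedicated tooling, a break-glass grant can be a short-lived RoleBinding that carries its own audit trail and is deleted when the incident closes. A sketch; the names, annotations, and timestamp are illustrative, and note that RBAC itself has no expiry, so the cleanup must be enforced by process or automation:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: break-glass-inc-1234          # delete when the incident closes
  namespace: app
  annotations:
    incident: "INC-1234"              # illustrative ticket reference
    expires: "2024-06-01T12:00:00Z"   # enforced by a cleanup job, not by RBAC
subjects:
- kind: Group
  name: break-glass                   # illustrative dedicated group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin                         # built-in namespaced admin role
```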
Checklist
- Each workload uses a dedicated ServiceAccount
- `automountServiceAccountToken: false` unless the app needs the API
- Prefer Role/RoleBinding over cluster-wide permissions
- Include required subresources (`pods/log`, `pods/exec`, etc.) intentionally
- Use `kubectl auth can-i` + impersonation to validate rules
- Keep break-glass access separate, temporary, and audited
Advanced knobs that help in mature clusters
Restrict with resourceNames (surgical permissions)
RBAC can restrict some rules to specific objects:
```yaml
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["api-config"]
  verbs: ["get"]
```
This is useful when an app needs access to one ConfigMap but not every ConfigMap in the namespace. One caveat: resourceNames cannot restrict list, watch, create, or deletecollection requests, because the object name isn't known at authorization time for those verbs.
Treat pods/exec and pods/portforward as privileged
Allowing exec or port-forward is effectively “shell access”. In many environments it should be:
- restricted to a small operator group
- audited
- disabled for general developer roles
If your organization needs “debug access”, separate it from general read-only access instead of bundling everything into one role.
Prefer built-in roles for human access (when appropriate)
Kubernetes includes some standard ClusterRoles (view, edit, admin). They’re not perfect for every organization, but they can be a reasonable starting point for human access, especially if you layer additional policies on top.
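For instance, granting the built-in `view` ClusterRole to a team within a single namespace looks like this (the group name is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: devs-view
  namespace: app
subjects:
- kind: Group
  name: devs                # illustrative group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
```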
For application ServiceAccounts, it’s usually better to define explicit Roles so you know exactly what the code can do.
Final takeaway
RBAC is most effective when it becomes a habit:
- every new workload starts with a minimal ServiceAccount
- permissions are reviewed like code
- escalation is normal but controlled
That’s how you keep clusters safe without slowing teams down.
FAQ
Q: Role vs ClusterRole? A: Roles are namespace-scoped; ClusterRoles are cluster-wide and can grant access to non-namespaced resources.
Q: Why does a Pod have more permissions than expected? A: It may be using the default ServiceAccount or a broad ClusterRoleBinding. Verify the SA and bindings.
Q: How do I test permissions? A: Use kubectl auth can-i with impersonation, e.g. --as system:serviceaccount:app:api, to check exactly what the Pod's ServiceAccount can do.