The Azure AKS Default Setting That’s Leaving Your Cluster Exposed — And 95% of Teams Never Touch It

 


Wait… this is enabled by default?

There’s a setting inside Azure Kubernetes Service (AKS) that sounds harmless. You probably glossed over it in the docs. Maybe it was checked by default in the Azure portal. Maybe your Terraform module shipped it that way. It’s quietly exposing your cluster to internal privilege escalation and potential data exfiltration. Let’s talk about the one AKS configuration you need to fix today.

The Setting: enableRBAC = true — But That’s Not the Problem

Let’s be clear: this isn’t an argument against enabling RBAC. RBAC itself is good and necessary.

The problem is that AKS creates a cluster-admin binding by default when you deploy a new cluster through the portal or ARM templates — unless you specifically override it.

What does that mean?

It means anyone who has access to the AKS cluster’s credentials can act as a full cluster admin — no questions asked. If you use Azure AD integration (which sounds secure), AKS will bind your AAD group to cluster-admin by default, unless you manually define your roles and permissions.

Scenario: You onboard a new dev to your team.

They get kubectl access via Azure AD. You tell them to “just deploy to the dev namespace.” But thanks to the default cluster-admin role binding…

  • They can delete any namespace.
  • They can exec into sensitive pods.
  • They can deploy privilege-escalation containers.
  • They can install CRDs that hijack admission control.

And this isn’t just hypothetical — this exact path has been used in internal lateral movement during red-team engagements in real Azure environments.
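Want to see it for yourself? kubectl can answer “what could this identity actually do?” through impersonation. A quick sketch, assuming your own account is allowed to impersonate, and using a made-up user and group name:

# Hypothetical identity: swap in the real user and the AAD group your devs are mapped to.
kubectl auth can-i --list --as="newdev@example.com" --as-group="aad:dev-team" -n dev

# Spot-check the scary verbs directly:
kubectl auth can-i delete namespaces --as="newdev@example.com" --as-group="aad:dev-team"
kubectl auth can-i create pods --subresource=exec -n prod --as="newdev@example.com" --as-group="aad:dev-team"

If those come back “yes” for someone who was only ever supposed to deploy to dev, the default binding is doing exactly what this post warns about.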

The Worst Part? Azure Doesn’t Warn You

No pop-up, no CLI warning, no notice in the portal. It’s just there. Like a loaded gun on the kitchen table. Next to your YAML and CI/CD pipeline.

How to Check If You’re Vulnerable

Run this:

kubectl get clusterrolebinding -o wide | grep cluster-admin

If you see something like

aks-cluster-admin-binding   ClusterRole/cluster-admin   Group/aad:MyTeamGroup

or

cluster-admin-binding       ClusterRole/cluster-admin   User/sajjad@example.com

That’s your red flag. Everyone in that group (and that individual user) has unrestricted access to the cluster.
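For a fuller audit, you can list every subject attached to a cluster-admin binding in one pass. A small sketch using plain kubectl output (no extra tooling assumed):

kubectl get clusterrolebindings \
  -o custom-columns='NAME:.metadata.name,ROLE:.roleRef.name,SUBJECTS:.subjects[*].name' \
  | grep cluster-admin

Every name in the SUBJECTS column is a user, group, or service account that can do anything in the cluster. That is the list you need to shrink.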

How to Fix It

1. Delete the Default Binding (Carefully)

If you know what you’re doing and have a fallback:

kubectl delete clusterrolebinding aks-cluster-admin-binding

Make sure you have another admin identity mapped in Azure AD before deleting this binding, or you could lock yourself out.
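Before running the delete, keep an escape hatch handy. A minimal sketch (the backup filename is arbitrary, and the --admin fallback only works while local accounts are still enabled on the cluster):

# Keep a copy so the binding can be restored by any identity that can still create ClusterRoleBindings.
kubectl get clusterrolebinding aks-cluster-admin-binding -o yaml > aks-cluster-admin-binding.backup.yaml

kubectl delete clusterrolebinding aks-cluster-admin-binding

# Break-glass on AKS: pull the local admin kubeconfig (only if local accounts haven't been disabled).
az aks get-credentials --resource-group my-rg --name my-aks --admin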

2. Create Least-Privilege Role Bindings Instead

For example, to give your devs view-only access to their namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-readonly
  namespace: dev
subjects:
- kind: Group
  name: "aad:dev-team"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole   # "view" is a built-in ClusterRole; the binding scopes it to the dev namespace
  name: view
  apiGroup: rbac.authorization.k8s.io
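To apply and sanity-check it (the filename is hypothetical, and note that with AKS-managed Azure AD the group subject typically has to be the AAD group’s object ID rather than a friendly name like "aad:dev-team"):

kubectl apply -f dev-readonly-rolebinding.yaml

# The group should be able to read in dev, and nothing more:
kubectl auth can-i get pods -n dev --as="newdev@example.com" --as-group="aad:dev-team"     # yes
kubectl auth can-i delete pods -n dev --as="newdev@example.com" --as-group="aad:dev-team"  # no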

3. Use Azure AD with Custom Role Mappings

Define your groups and what they can do — don’t let AKS assign cluster-admin by default.

az aks update \
  --resource-group my-rg \
  --name my-aks \
  --enable-aad \
  --aad-admin-group-object-ids <GUIDs>

And then limit that group to what’s necessary, not full cluster admin.
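One way to do that trimming, if you also opt into Azure RBAC for Kubernetes authorization (an extra step, not something AKS turns on for you), is to hand out namespace-scoped built-in roles instead of cluster admin. A sketch with hypothetical names and placeholder GUIDs:

# Turn on Azure RBAC for Kubernetes authorization and drop the static local admin credential.
az aks update --resource-group my-rg --name my-aks --enable-azure-rbac
az aks update --resource-group my-rg --name my-aks --disable-local-accounts

# Give the dev team read access to the dev namespace only, nothing cluster-wide.
AKS_ID=$(az aks show --resource-group my-rg --name my-aks --query id -o tsv)
az role assignment create \
  --role "Azure Kubernetes Service RBAC Reader" \
  --assignee <dev-team-group-object-id> \
  --scope "$AKS_ID/namespaces/dev"

Keep the --aad-admin-group-object-ids group tiny (break-glass accounts only) and let everyone else come in through scoped assignments like this.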

Why This Happens: Convenience Over Security

This “default” exists because Microsoft wants AKS to be easy to onboard. But convenience and security are almost always at odds. What’s worse: most devs and ops teams don’t even know the default exists — until they run a security audit and find that half the team has cluster-admin access… in prod.
