One of the most significant benefits of containers is that they empower a software engineer to explore technologies and infrastructure decisions quickly. A developer can try out new technologies and platforms and weigh infrastructure decisions without a long approval and requisition process. This also reduces cost significantly, since many containers can run on a developer laptop.
Access control and management of Kubernetes introduce a potential roadblock to this agile aspect of container-centric development. As a result, I put a lot of thought into how to provide easy, fast, auditable, compliant, secure, measurable, manageable, self-service access to Kubernetes for developers. My goal was to make provisioning resources in Kubernetes as effortless as creating containers on a laptop. Below is the solution I designed.
Authentication and Authorization
Kubernetes provides various authentication schemes. OIDC, which builds on OAuth 2.0, makes it possible to delegate authentication to an existing identity provider, like Active Directory. This lends itself to a number of security best practices, like password lifecycle management, automatic revocation of access when an employee is terminated, and auditing of actions performed by specific users. Kubernetes can also authenticate with certs, tokens or passwords, but it’s much nicer to leverage an existing identity provider.
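To make the OIDC flow concrete, the apiserver validates a signed ID token from the identity provider and maps its claims (such as email and group membership) to a Kubernetes username and groups. The sketch below, in Python, decodes the claims segment of a token the way any OIDC consumer would; the token here is fabricated for illustration (header and signature elided, no signature verification), since a real token is issued and signed by the provider.

```python
import base64
import json

def decode_claims(id_token: str) -> dict:
    """Decode the middle (payload) segment of a JWT-format OIDC ID token.

    The payload is base64url-encoded JSON; the apiserver reads claims from
    it (e.g. "email", "groups") to establish the requester's identity.
    """
    payload_b64 = id_token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Fabricated token for demonstration: "x" stands in for the real
# header and signature segments.
claims = {"email": "jane@example.com", "groups": ["dev-team"]}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = "x." + payload + ".x"

print(decode_claims(token)["email"])  # -> jane@example.com
```

In a real deployment the apiserver does this (plus signature verification) itself, driven by flags that name the issuer and which claims to use for username and groups.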
Kubernetes also facilitates Role-Based Access Control (RBAC), which builds on the authentication mentioned above by adding authorization. For a truly secure cluster, segmentation is important. Namespaces provide a resource boundary in Kubernetes, similar to Projects (formerly Tenants) in OpenStack, or an account in one of the major cloud providers. The first level of authorization should align with a namespace. Even within that namespace, not every user should necessarily have the same level of access to the same set of resources.
Roles, RoleBindings and Account Types
Kubernetes offers two scopes for Roles and RoleBindings: Cluster and Namespace. For user access, this solution relies only on namespace-scoped Roles and RoleBindings. The following diagram illustrates the relationship between a Namespace, Roles and RoleBindings.
As the diagram above shows, a RoleBinding binds subjects (users and accounts) to a Role. The Role identifies specific resources by way of apiGroups, and verbs define what subjects can do with those resources. Both Roles and RoleBindings belong to a namespace.
The Subjects in a RoleBinding can consist of Users and ServiceAccounts. A User is (or should be) mortal, and should have a one-to-one correlation to a single person. A ServiceAccount is more generic, and will typically be used to facilitate communication from a non-mortal consumer. An example of a ServiceAccount might be one for Jenkins to create resources during a build process. Rather than provide Jenkins with a person’s (mortal) credentials, a ServiceAccount, with its own credential, is created and added to a RoleBinding which is bound to a Role designed to give only the specific access Jenkins requires to perform builds.
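The Role/RoleBinding relationship described above can be sketched as manifests. Here is a minimal sketch in Python dict form (the namespace "widget", the user, and the "build-bot" ServiceAccount are hypothetical names, not from the solution itself); the portal would emit equivalent YAML or JSON.

```python
# A namespaced Role: rules pair apiGroups and resources with verbs.
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"namespace": "widget", "name": "developer"},
    "rules": [{
        "apiGroups": ["", "apps"],  # "" is the core API group
        "resources": ["pods", "deployments"],
        "verbs": ["get", "list", "watch", "create", "update", "delete"],
    }],
}

# The RoleBinding attaches subjects -- a mortal User and a non-mortal
# ServiceAccount -- to that Role via roleRef.
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"namespace": "widget", "name": "developer-binding"},
    "subjects": [
        {"kind": "User", "name": "jane@example.com",
         "apiGroup": "rbac.authorization.k8s.io"},
        {"kind": "ServiceAccount", "name": "build-bot", "namespace": "widget"},
    ],
    "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                "kind": "Role", "name": "developer"},
}
```

Note that the roleRef names the Role within the same namespace; both objects live and die with that namespace.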
In the diagram I show Users as defined in Azure Active Directory, but this could be any identity provider that supports OIDC.
The Onboarding Portal
I then designed a portal that would make it possible for an engineer to authenticate and create a new Namespace. Below is a mockup of how the portal functions.
The same identity provider that supplies OIDC authentication to the Kubernetes apiserver can be used to allow access to the onboarding portal. Once logged in, the user can manage namespaces in various ways.
For each namespace there are one or more Owners and Viewers. These two designations describe a user’s access to the namespace in the onboarding portal, NOT in Kubernetes. In the portal, an Owner can view, manage and delete a namespace, as well as generate the access modal that grants kubectl access (see below). A Viewer can see the namespace and generate the access modal for kubectl access. Any logged-in user can create a new namespace. There is currently no limit to how many namespaces a user can create.
Owner and Viewer information, along with other portal state, is stored as metadata on the namespace itself. This keeps Kubernetes as the single source of truth and eliminates the need for the portal to have its own database. Backups of etcd will include all portal details, leaving the portal stateless. All communication between the portal and Kubernetes occurs by way of the Kubernetes REST API using a ServiceAccount with a corresponding ClusterRole and ClusterRoleBinding.
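One natural way to carry this metadata is as annotations on the Namespace object. The sketch below shows the idea; the annotation keys under portal.example.com are hypothetical, not a Kubernetes convention, and a real portal would pick its own prefix.

```python
def namespace_manifest(name: str, owners: list, viewers: list) -> dict:
    """Build a Namespace manifest that carries portal state as annotations.

    Storing Owners/Viewers on the namespace itself keeps Kubernetes (etcd)
    as the single source of truth -- no separate portal database needed.
    """
    return {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {
            "name": name,
            "annotations": {
                # Hypothetical keys; comma-joined lists for simplicity.
                "portal.example.com/owners": ",".join(owners),
                "portal.example.com/viewers": ",".join(viewers),
            },
        },
    }

ns = namespace_manifest("widget", ["jane@example.com"], ["joe@example.com"])
```

Because the annotations travel with the namespace, an etcd backup automatically captures everything the portal knows.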
I defined four roles that are created for every new namespace: Admin, Secret Manager, Developer and Observer. An Admin can perform all verbs against all resources within the namespace. The Secret Manager can perform all verbs against Secret and ConfigMap resources in the namespace. The Developer can perform all verbs against all resources except Secret and ConfigMap in the namespace. The Observer can get, list and watch all resources except Secret and ConfigMap in the namespace.
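The four default roles might be sketched as follows. One caveat worth noting: RBAC rules are additive, with no deny or "all except" form, so "all resources except Secret and ConfigMap" has to be expressed by enumerating the allowed resources; the GENERAL list below is an illustrative subset, not the full enumeration a real portal would maintain.

```python
# Rule sets for the four default roles created in every new namespace.
ALL_VERBS = ["*"]
READ_VERBS = ["get", "list", "watch"]
SENSITIVE = ["secrets", "configmaps"]
GENERAL = ["pods", "deployments", "services", "jobs"]  # illustrative subset

DEFAULT_ROLES = {
    "admin":          {"resources": ["*"],     "verbs": ALL_VERBS},
    "secret-manager": {"resources": SENSITIVE, "verbs": ALL_VERBS},
    "developer":      {"resources": GENERAL,   "verbs": ALL_VERBS},
    "observer":       {"resources": GENERAL,   "verbs": READ_VERBS},
}

def role_for(namespace: str, role_name: str) -> dict:
    """Build the Role manifest for one of the default roles."""
    spec = DEFAULT_ROLES[role_name]
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"namespace": namespace, "name": role_name},
        "rules": [{"apiGroups": ["*"],
                   "resources": spec["resources"],
                   "verbs": spec["verbs"]}],
    }
```

The portal would create all four Roles (and their RoleBindings) as part of namespace provisioning.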
For each Role there is a single RoleBinding. As users are added to each Role in the portal, they are added to the list of Subjects in the corresponding RoleBinding. I also introduced a validation step that ensures the user exists in the chosen identity provider and that the username format is correct before adding them to the Subjects collection.
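The add-user step might look like the sketch below. The regex format check and the idempotent append are hypothetical details of my own; a real implementation would also call out to the identity provider to confirm the user actually exists before mutating the RoleBinding.

```python
import re

# Simplistic email-shaped username check; a real portal would also verify
# the user exists in the identity provider.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def add_user(binding: dict, username: str) -> dict:
    """Validate a username and append it to a RoleBinding's subjects.

    The append is idempotent: adding the same user twice leaves a single
    subject entry, so repeated portal actions are safe.
    """
    if not EMAIL_RE.match(username):
        raise ValueError(f"invalid username: {username!r}")
    subject = {"kind": "User", "name": username,
               "apiGroup": "rbac.authorization.k8s.io"}
    subjects = binding.setdefault("subjects", [])
    if subject not in subjects:
        subjects.append(subject)
    return binding
```

After mutating the subjects list, the portal would PUT the updated RoleBinding back through the Kubernetes REST API.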
Notice that it’s possible for someone to be a Viewer of a namespace without being assigned to any Role in that namespace. This could be corrected by ensuring that every Viewer is also at least a member of the Observer Role for that namespace.
Namespace Access using kubectl
The most common way to interact with a Kubernetes cluster is the kubectl CLI. The Access modal in the portal provides commands that add User, Cluster and Context details to the kubectl config file (usually ~/.kube/config). This is similar to running gcloud container clusters get-credentials for a GKE cluster.
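The entries those commands produce might be sketched as below. The cluster name, server address, and token here are hypothetical placeholders; the effect is the same as running kubectl config set-cluster, set-credentials, set-context and use-context in sequence.

```python
def kubeconfig_entries(cluster: str, server: str, namespace: str,
                       user: str, id_token: str) -> dict:
    """Build the cluster/user/context entries the Access modal adds
    to ~/.kube/config, defaulting the context to the user's namespace."""
    context = f"{user}@{cluster}"
    return {
        "clusters": [{"name": cluster, "cluster": {"server": server}}],
        "users": [{"name": user, "user": {"token": id_token}}],
        "contexts": [{"name": context,
                      "context": {"cluster": cluster, "user": user,
                                  "namespace": namespace}}],
        "current-context": context,
    }

cfg = kubeconfig_entries("prod", "https://k8s.example.com", "widget",
                         "jane", "PLACEHOLDER-TOKEN")
```

With the namespace baked into the context, kubectl commands land in the user's namespace without a --namespace flag on every invocation.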
Many enterprises have a Configuration Management Database (CMDB) that captures details about applications, their criticality, their ownership, where they are deployed, how they are connected, etc. The portal can be made to query the CMDB for the currently logged-in user and retrieve a list of applications for which they have some role. They can then create namespaces only for those applications, and the portal would create a new Configuration Item (CI) in the CMDB for the namespace. This would ensure that at an organizational level, all resources are appropriately tracked and monitored, and that a contact person can be identified if something needs attention.
Managed kubernetes (GKE, AKS and EKS)
One key aspect of this solution is the ability to configure the apiserver to authenticate user requests using OIDC. At present, our Kubernetes clusters are hosted in our datacenter. We are interested in the managed Kubernetes offerings from the top three cloud providers, but none of them accommodates OIDC authentication with an external provider. Instead, they all support their own IAM tool as the primary authentication mechanism.
Some ways to work around this might include identity federation, as provided by AWS, or Google Cloud Sync plus AD Password Sync. All of these solutions involve finding a way to keep using IAM for authentication rather than supporting OIDC authentication directly. They just sync details from an external identity provider into the IAM solution. I submitted a feature request for GKE, so we’ll see if they pick up the idea of external authentication providers in their managed service.
For now, I haven’t deployed this solution on a managed kubernetes cluster.
The portal makes it possible to give engineers and developers immediate, self-service access to kubernetes resources, while ensuring security and sensible access controls. Default Roles are provided for each new namespace, and namespace Admins can create additional Roles and RoleBindings, as well as ServiceAccounts, for any other access needs. When a namespace is no longer needed, it can be deleted through the portal, along with all resources under it. Any identity provider that supports OIDC can be used to authenticate against the apiserver and the portal.