Pinniped v0.7.0: Enabling multi-cloud, multi-provider Kubernetes
Apr 1, 2021
Pinniped is a “batteries included” authentication system for Kubernetes clusters. With the release of v0.7.0, Pinniped now supports a much wider range of real-world Kubernetes clusters, including managed Kubernetes environments on all major cloud providers.
This post describes how v0.7.0 fits into Pinniped’s quest to bring a smooth, unified login experience to all Kubernetes clusters.
Authentication in Kubernetes
Kubernetes includes a pluggable authentication system right out of the box. While it doesn’t have an end-to-end login flow for users, it does support many ways to authenticate individual requests. These include JSON Web Tokens (JWTs), x509 client certificates, and opaque bearer tokens validated by an external webhook.
As a cluster administrator, you can configure these options by passing the appropriate command-line flags to the kube-apiserver process. For example, to configure x509 client certificates, you must set the --client-ca-file flag to reference an x509 certificate authority bundle.
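To make one of these mechanisms concrete, here is a minimal sketch, in Go, of the server side of the webhook option: the API server, configured via its --authentication-token-webhook-config-file flag, POSTs a TokenReview for each opaque bearer token, and the webhook replies with the user's identity. The /authenticate path, the port, the certificate file names, and the hard-coded token check are placeholders for illustration only; they are not part of Kubernetes or Pinniped.

    package main

    import (
        "encoding/json"
        "log"
        "net/http"

        authv1 "k8s.io/api/authentication/v1"
    )

    func main() {
        http.HandleFunc("/authenticate", func(w http.ResponseWriter, r *http.Request) {
            // The API server sends a TokenReview containing the opaque bearer token.
            var review authv1.TokenReview
            if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
                http.Error(w, err.Error(), http.StatusBadRequest)
                return
            }

            // Placeholder check: a real webhook would look the token up in its
            // own identity store.
            if review.Spec.Token == "demo-token" {
                review.Status = authv1.TokenReviewStatus{
                    Authenticated: true,
                    User: authv1.UserInfo{
                        Username: "jane",
                        Groups:   []string{"developers"},
                    },
                }
            } else {
                review.Status = authv1.TokenReviewStatus{Authenticated: false}
            }

            // Echo the TokenReview back with the status filled in.
            w.Header().Set("Content-Type", "application/json")
            _ = json.NewEncoder(w).Encode(review)
        })
        // tls.crt and tls.key are placeholder serving credentials.
        log.Fatal(http.ListenAndServeTLS(":9443", "tls.crt", "tls.key", nil))
    }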
If you are hand-crafting a Kubernetes installation or building a custom distribution, you can use these options to integrate Kubernetes into your existing identity infrastructure.
However, in many real-world scenarios your options are more limited:
If you run your clusters using managed services such as Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), or Google Kubernetes Engine (GKE), you won’t have access to set the required flags. These cloud providers don’t allow cluster administrators to set arbitrary API server command-line flags, so you must use their respective built-in identity systems.
Even if you build and install your own Kubernetes clusters, changing kube-apiserver flags requires reconfiguring and restarting the cluster control plane. This can be a daunting task if you have dozens or hundreds of disparate existing clusters spread across an enterprise.
Pinniped closes these gaps by enabling dynamic reconfiguration of Kubernetes authentication on existing clusters. This empowers cluster administrators to unify cluster login flows across all their clusters, even when they span multiple clouds and providers.
The Concierge
The Pinniped Concierge component implements cluster-level authentication. It runs on each Kubernetes cluster to enable Pinniped-based logins on that cluster. When a new user arrives, the Concierge server verifies the user’s external identity and helps them access the cluster.
The design of the Concierge supports multiple backend strategies. Each strategy helps Pinniped integrate with some class of Kubernetes clusters.
Concierge before v0.7.0
Although the Concierge design allows for multiple strategies, before v0.7.0 there was only one: KubeClusterSigningCertificate.
When the Concierge starts, the KubeClusterSigningCertificate strategy:
Looks for a kube-controller-manager pod in the kube-system namespace. If it finds no such pod, it marks the strategy as failed.
Creates a "kube cert agent" pod running in the Concierge namespace. This pod has all the same node selectors, tolerations, and host volume mounts as the original kube-controller-manager pod, but simply runs a sleep command.
Uses the pod exec API to connect and run cat. Using this technique, it reads both the cluster signing certificate (--cluster-signing-cert-file) and key (--cluster-signing-key-file) and loads them into an in-memory certificate signer in the main Concierge process (see the sketch after this list).
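The exec step can be approximated with client-go's remotecommand package. This is only a sketch under assumptions: the namespace, pod name, and file path arguments are placeholders supplied by the caller, and the real Concierge controllers add error handling, caching, and lifecycle management that are omitted here.

    package kubecertagent

    import (
        "bytes"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/remotecommand"
    )

    // readFileViaExec runs `cat <path>` inside the kube cert agent pod and
    // returns whatever the command prints, e.g. the PEM-encoded cluster
    // signing certificate or key mounted from the host.
    func readFileViaExec(config *rest.Config, client kubernetes.Interface, namespace, podName, path string) ([]byte, error) {
        req := client.CoreV1().RESTClient().Post().
            Resource("pods").Namespace(namespace).Name(podName).
            SubResource("exec").
            VersionedParams(&corev1.PodExecOptions{
                Command: []string{"cat", path},
                Stdout:  true,
                Stderr:  true,
            }, scheme.ParameterCodec)

        executor, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
        if err != nil {
            return nil, err
        }

        var stdout, stderr bytes.Buffer
        if err := executor.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
            return nil, fmt.Errorf("exec failed: %v (stderr: %q)", err, stderr.String())
        }
        return stdout.Bytes(), nil
    }

The paths passed to cat are the values of the --cluster-signing-cert-file and --cluster-signing-key-file flags discovered on the kube-controller-manager pod.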
Later, when a user runs kubectl:
The kubectl process invokes the Pinniped ExecCredential plugin. The plugin code obtains the user's external credential, then sends a TokenCredentialRequest to the cluster's aggregated API server endpoint.
The TokenCredentialRequest handler in the Concierge validates the user's external credential. Once it has authenticated the user, it uses the cluster signing certificate to issue and return a short-lived client certificate encoding the user's identity (sketched after this list). This certificate is valid for five minutes.
The plugin code passes the short-lived certificate back to kubectl, which makes its authenticated API requests to the Kubernetes API server using the temporary client certificate.
This strategy works on clusters where the kube-controller-manager runs as a normal pod on a schedulable cluster node. This includes many real-world clusters, such as those created by kubeadm.
It has little or no performance overhead because Pinniped isn’t directly in the request path. Because all the interactions between the client and the Concierge happen via Kubernetes API aggregation, it doesn’t require any additional ingress or external load balancer support. This also makes it great for simple use cases such as kind.
However, it comes with one big caveat: it doesn’t support any of the most popular managed Kubernetes services.
Adding support for managed clusters
On popular managed Kubernetes services, the Kubernetes control plane isn’t accessible to the usual cluster administrator.
This requires a new strategy: ImpersonationProxy.
When the Concierge starts, the ImpersonationProxy strategy:
Looks for nodes labeled as control plane nodes. If it finds any, it puts itself in an inactive state as it’s not needed.
Starts serving an HTTPS endpoint on TCP port 8444. This endpoint serves as an impersonating proxy for the Kubernetes API (more details on this below).
Creates a Service of type: LoadBalancer and waits for the cloud provider to assign it an external hostname or IP address (see the sketch after this list).
Issues an x509 certificate authority and serving certificates for the external endpoint. Clients use this certificate authority to verify connections to the impersonation proxy.
Issues an x509 certificate authority for issuing client certificates. This client CA isn’t trusted by Kubernetes but is trusted by the impersonation proxy handler.
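The LoadBalancer step might look roughly like this with client-go. Everything here is a hedged sketch: the namespace, Service name, selector, and port values are placeholders, and the real Concierge also handles updates, deletion, and alternative ways of exposing the endpoint.

    package impersonatorconfig

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
    )

    // ensureLoadBalancer creates a LoadBalancer Service in front of the proxy's
    // HTTPS listener and blocks until the cloud provider assigns it an external
    // hostname or IP address (or the context is cancelled).
    func ensureLoadBalancer(ctx context.Context, client kubernetes.Interface) (string, error) {
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "impersonation-proxy-load-balancer", Namespace: "pinniped-concierge"},
            Spec: corev1.ServiceSpec{
                Type:     corev1.ServiceTypeLoadBalancer,
                Selector: map[string]string{"app": "pinniped-concierge"},
                Ports: []corev1.ServicePort{{
                    Port:       443,
                    TargetPort: intstr.FromInt(8444), // the proxy's HTTPS listener
                }},
            },
        }
        if _, err := client.CoreV1().Services(svc.Namespace).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
            return "", err
        }

        // Poll until the cloud provider fills in an external hostname or IP.
        ticker := time.NewTicker(5 * time.Second)
        defer ticker.Stop()
        for {
            current, err := client.CoreV1().Services(svc.Namespace).Get(ctx, svc.Name, metav1.GetOptions{})
            if err != nil {
                return "", err
            }
            for _, ingress := range current.Status.LoadBalancer.Ingress {
                if ingress.Hostname != "" {
                    return ingress.Hostname, nil
                }
                if ingress.IP != "" {
                    return ingress.IP, nil
                }
            }
            select {
            case <-ctx.Done():
                return "", ctx.Err()
            case <-ticker.C:
            }
        }
    }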
Later, when a user runs kubectl:
As before, the kubectl process invokes the Pinniped ExecCredential plugin (part of the pinniped command-line tool). The plugin code obtains the user's external credential, then makes a TokenCredentialRequest. This request happens as an anonymous request to the impersonation proxy endpoint.
The TokenCredentialRequest handler in the Concierge validates the user's external credential. Once it has authenticated the user, it uses the Pinniped client signing certificate to issue and return a short-lived (5m) client certificate encoding the user's identity. This certificate is only valid when presented to the impersonation proxy, not when presented directly to the real Kubernetes API server.
The plugin code passes the short-lived certificate back to kubectl. Unlike before, the kubeconfig now points at the impersonation proxy endpoint.
The impersonation proxy receives the incoming request from kubectl and authenticates it via the client certificate. Once it knows the user's identity, it impersonates the authenticated user by adding Impersonate- headers. It forwards the impersonating request to the real Kubernetes API server and proxies the response back to the user (see the sketch after this list).
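The final step can be sketched as a small reverse proxy. Impersonate-User and Impersonate-Group are the standard Kubernetes impersonation headers; the rest (function and variable names, the service account token path, the omitted upstream TLS configuration) is simplified and illustrative rather than Pinniped's actual implementation.

    package impersonator

    import (
        "net/http"
        "net/http/httputil"
        "net/url"
        "os"
    )

    // newImpersonatingProxy assumes TLS client-certificate authentication has
    // already happened, so the verified certificate in r.TLS carries the user's
    // identity. It forwards each request to the real API server using the
    // Concierge's own service account token plus impersonation headers.
    func newImpersonatingProxy(apiServerURL string) (http.Handler, error) {
        target, err := url.Parse(apiServerURL)
        if err != nil {
            return nil, err
        }

        // The Concierge's own credential for the upstream connection.
        saToken, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
        if err != nil {
            return nil, err
        }

        proxy := httputil.NewSingleHostReverseProxy(target)
        // Omitted for brevity: set proxy.Transport so it trusts the cluster's CA.

        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.TLS == nil || len(r.TLS.PeerCertificates) == 0 {
                http.Error(w, "client certificate required", http.StatusUnauthorized)
                return
            }
            clientCert := r.TLS.PeerCertificates[0]

            // Never trust impersonation headers supplied by the caller.
            r.Header.Del("Impersonate-User")
            r.Header.Del("Impersonate-Group")

            // Impersonate the identity encoded in the verified client certificate.
            r.Header.Set("Impersonate-User", clientCert.Subject.CommonName)
            for _, group := range clientCert.Subject.Organization {
                r.Header.Add("Impersonate-Group", group)
            }

            // Authenticate to the API server as the Concierge itself; RBAC on
            // the Concierge's service account authorizes the impersonation.
            r.Header.Set("Authorization", "Bearer "+string(saToken))

            proxy.ServeHTTP(w, r)
        }), nil
    }

One way to read the nested-impersonation limitation mentioned below: in this design the impersonation headers already carry the end user's identity, so there is no straightforward place for an additional kubectl --as identity.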
This strategy works on any conformant cluster with working LoadBalancer service support. It has some disadvantages, namely the overhead involved in proxying requests and the extra setup time required to provision a LoadBalancer service.
Conclusion and future work
Pinniped now supports a large majority of real-world Kubernetes clusters! Our automated test suite ensures that Pinniped is stable and functional across a wide range of Kubernetes versions and several providers including EKS, AKS, and GKE.
This is a great start, but there are more strategies left to build:
A strategy that loads the cluster signing certificate/key directly from a Secret (for example, as it appears in OpenShift).
A strategy that takes advantage of future CertificateSigningRequest API enhancements that support short-lived certificates (see kubernetes/kubernetes#99494).
A strategy that issues non-certificate credentials, such as if a cluster has been statically configured to trust a JWT issuer.
The current implementation also has a few missing features:
There is no support for "nested" impersonation. This means you can't use the --as or --as-group flags in kubectl when you're connecting through the impersonation proxy.
It only supports certificate-based authentication. You can't authenticate to the impersonation proxy directly with a ServiceAccount token, for example.
Depending on your cloud provider's LoadBalancer implementation, you may experience timeouts on long-running requests that sit idle. For example, a kubectl logs command for a quiet app may exit after as few as four minutes of silence.
We invite your suggestions and contributions to make Pinniped work across all flavors of Kubernetes.