Securing kube-bind with Keycloak: A Production-Ready OIDC Setup

In this tutorial, I'll be showing you how to integrate Keycloak into kube-bind so that authentication is handled by an external identity provider instead of the embedded mock one.

If you've been following along from the previous posts, you know that kube-bind lets you project APIs from a provider cluster into a consumer cluster. To do that securely, it uses OIDC for authentication. In the quickstart guide, we used the embedded OIDC provider — which is great for tinkering locally, but absolutely not something you'd ship to production.

In production, you want a proper identity provider: one that manages users, groups, tokens, and sessions correctly. For that, we'll use Keycloak.
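As a preview of where we're headed, the kube-bind backend needs a confidential OIDC client in Keycloak. A minimal client definition looks roughly like the fragment below. This is a sketch: the client ID, secret, and redirect URI are placeholders for this post, though the field names follow Keycloak's client representation (as used in realm exports and the Admin API). The redirect URI must match the callback URL you later configure on the kube-bind backend.

```json
{
  "clientId": "kube-bind-backend",
  "protocol": "openid-connect",
  "publicClient": false,
  "secret": "replace-with-a-real-secret",
  "standardFlowEnabled": true,
  "redirectUris": [
    "https://kube-bind.example.com/callback"
  ]
}
```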

Deep Dive: Understanding kube-bind Internals

Note: Just want to get started quickly? Check out our Quick Start Guide which sets up everything automatically with kubectl bind dev create. This article explains the machinery under the hood—the "Hard Way".

If you’ve ever tried to make one cluster consume resources from another, you’ve probably had to deal with complicated networking setups, VPN tunnels, duplicated Custom Resource Definitions (CRDs), or custom-built controllers to keep everything in sync.

As organizations grow, multi-cluster setups become inevitable, and they bring their own headaches, especially when you need to share services or resources between clusters. Doing this in Kubernetes is inherently hard because clusters are isolated by design. They don’t natively “talk” to each other.

This is where kube-bind comes in.

Getting Started with kube-bind: The Quick Way

kube-bind facilitates service sharing between Kubernetes clusters. It allows a Service Provider cluster to export APIs and a Consumer cluster to bind to them, projecting the resources into the consumer's cluster. This enables seamless cross-cluster consumption without complex networking or federation.

Here is what we will build:

```mermaid
graph TB
    User((User))

    subgraph Local["Local Computer"]
        CLI["kubectl bind CLI"]
    end

    subgraph Provider["Provider Cluster"]
        Backend["kube-bind Backend"]
        ProviderAPI["MangoDB API/CRD"]
        Backend --> ProviderAPI
    end

    subgraph Consumer["Consumer Cluster"]
        Konnector["Konnector Agent"]
        BoundAPI["MangoDB CRD<br/>Synced Copy"]
        Konnector --> BoundAPI
    end

    User -->|"1. kubectl bind dev create"| CLI
    CLI -.->|"Creates & Installs"| Backend
    CLI -.->|"Creates Cluster"| Consumer

    User -->|"2. kubectl bind login"| Backend
    Backend -->|"Auth Token"| User

    User -->|"3. kubectl bind create<br/>Select API in UI"| CLI
    CLI -->|"Install Konnector"| Konnector

    Konnector <-->|"4. Syncs Resources"| Backend

    User -.->|"5. kubectl apply/get"| BoundAPI
    BoundAPI -.->|"Synced to"| ProviderAPI
```
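In CLI terms, the numbered steps in the diagram translate roughly to the session below. Treat it as a sketch rather than a copy-paste recipe: the provider URL is a placeholder, and exact subcommands and flags can vary between kube-bind versions.

```shell
# 1. Spin up a local provider/consumer pair with the dev helper
kubectl bind dev create

# 2. Authenticate against the provider backend (triggers the OIDC flow)
kubectl bind login

# 3. Bind an API: a browser UI lets you pick an export (e.g. MangoDB);
#    this step also installs the konnector agent in the consumer cluster
kubectl bind https://provider.example.com/export

# 5. From here on, use the bound API like any local resource;
#    the konnector keeps it in sync with the provider (step 4)
kubectl get mangodbs
```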