
Add a Provider ⏱️ 17m

Running solo locally?

Same commands, same cluster. See Solo local setup (k3d).

7.1 Before you start ⏱️ 3m

So far, every resource you've composed has been a plain Kubernetes object: ConfigMap, Deployment, Service. Crossplane core itself was the controller that applied them. You never installed a Provider, and you didn't need one for that path.

That's only half of Crossplane. The other half is Providers: packages that teach Crossplane to talk to external APIs — cloud SDKs, the Helm CLI, foreign Kubernetes clusters, anything with a Go SDK someone has wrapped. Each Provider ships its own custom kinds (managed resources) that Crossplane reconciles by calling the external API.

Why install a Provider when modules 4-6 didn't need one?

Three real cases native composition can't reach:

  • Anything outside Kubernetes' built-in API. Functions can emit native kinds (Deployments, Services, ConfigMaps), but they can't talk to AWS or GCP or render a Helm chart. Those need a Provider that wraps the external SDK.
  • Adopting an existing in-cluster resource you didn't create. Composition functions create resources from scratch. provider-kubernetes's Object MR with managementPolicies: [Observe] lets you take ownership of a resource someone else created, mirror its status, and only then start reconciling it.
  • Targeting a different cluster or account. A ProviderConfig can point at any kubeconfig or cloud credential, not just the one Crossplane is installed in. Multi-cluster and multi-account fleets ride on this.
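The adoption case in the second bullet looks roughly like this. A minimal sketch of a provider-kubernetes Object MR, assuming that provider is installed; the apiVersion, the `legacy-app` Deployment name, and the providerConfigRef are illustrative, not copied from any module:

```yaml
# Sketch only: adopt an existing Deployment without touching it.
# The API version (and whether the namespaced .m. kind is available)
# depends on your provider-kubernetes release; "legacy-app" is hypothetical.
apiVersion: kubernetes.m.crossplane.io/v1alpha1
kind: Object
metadata:
  name: adopt-legacy-app
  namespace: workshop-helm
spec:
  managementPolicies: ["Observe"]   # mirror status, never write
  forProvider:
    manifest:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: legacy-app
        namespace: workshop-helm
  providerConfigRef:
    kind: ProviderConfig
    name: default
```

Dropping `Observe` from managementPolicies later is what flips the resource from "mirrored" to "reconciled".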

provider-helm is the friendliest example to start with — its external "API" is helm install against the same cluster you're already on, so you can ignore the cross-cluster and cloud-credential angles for now and focus on the Provider/ProviderConfig/MR shape.

In this module you'll install provider-helm and use it to install the podinfo demo chart through Crossplane.

The pieces — a quick recap

You've now seen most of the components a Crossplane platform is built from. Three new ones land in this module: Provider, ProviderConfig, and ClusterProviderConfig. The two scope columns split a subtlety the v1 docs glossed over: a defining object (CRD, XRD) is itself cluster-scoped, but the kind it defines may be namespaced or cluster-scoped depending on its spec.scope.

| Component | What it is | Who creates it | Object scope | Scope of the kind it defines / produces |
| --- | --- | --- | --- | --- |
| CRD | Kubernetes Custom Resource Definition. Extends the API server with a new kind. | Crossplane (auto-generated from an XRD); the cluster operator for built-ins | Cluster | Namespaced or Cluster, set in the CRD's spec.scope |
| XRD | Composite Resource Definition: your declaration of a new Crossplane API. Applying an XRD makes Crossplane generate the matching CRD. | You (the platform author) | Cluster | Namespaced (v2 default), Cluster, or LegacyCluster (v1-compat with claims), set in spec.scope |
| XR | Composite Resource: an instance of an XRD that triggers a Composition. | Platform users | Per the XRD (Namespaced in v2 by default) | |
| Composition | The recipe. "When this XR exists, produce these resources." | You (the platform author) | Cluster | |
| Composition function | Pluggable logic the Composition's pipeline runs to produce desired state (function-patch-and-transform, …). | Function package author; you install it | Cluster (it's a Crossplane package, like a Provider) | |
| MR | Managed Resource: a Kubernetes representation of an external thing a Provider reconciles. | Crossplane (composed) or you (directly) | Per-Provider choice; most v2 providers ship namespaced variants in a *.m.crossplane.io API group alongside the legacy cluster-scoped kinds | |
| Provider (new this module) | A package teaching Crossplane to manage a class of external API (Helm, AWS, Kubernetes, …). | You install from a registry; Crossplane runs the controller | Cluster | |
| ProviderConfig (new this module) | Per-Provider runtime config (credentials, target endpoint); the namespaced flavor (in *.m.crossplane.io for providers that have caught up to v2). | You | Namespaced | |
| ClusterProviderConfig (new this module) | Per-Provider runtime config; v2's cluster-scoped flavor, shared across namespaces. | You | Cluster | |

A Provider at runtime, in one picture

A Provider is a package whose install spawns a long-running Pod (thick arrows: Crossplane-managed work). The Pod watches MRs of its kinds — Release.helm.m.crossplane.io here — and translates each one into calls against an external API (for provider-helm that "API" is the Helm SDK pulling charts and applying them to the cluster's own apiserver).

You're about to: install a Provider, configure it (with both flavors of ProviderConfig), and apply one Managed Resource that installs a Helm chart.

7.2 Install provider-helm ⏱️ 4m

A Provider is just another Crossplane package. Apply the manifest, wait for it to go Healthy.

```shell
kubectl apply -f - <<'EOF'
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-helm
spec:
  package: xpkg.upbound.io/crossplane-contrib/provider-helm:v1.2.0
EOF
```

Wait for the package to download and the controller pod to come up (~30 seconds):

```shell
kubectl wait --for=condition=Healthy provider/provider-helm --timeout=180s
```

Expected output:

```
provider.pkg.crossplane.io/provider-helm condition met
```

v1.2.0 matters: it's the first stable release that ships the v2 namespaced ProviderConfig + Release kinds (the helm.m.crossplane.io group, with the .m. marking namespaced). v0.x versions only have the legacy cluster-scoped kinds; the rest of this module won't work on those.

Grant the provider permission to install charts

provider-helm is its own Pod with its own Kubernetes ServiceAccount; that SA's permissions are what limit which charts the provider can install and where. By default it has none. To install Helm charts into arbitrary namespaces, bind that SA to the built-in cluster-admin ClusterRole:

```shell
SA=$(kubectl get sa -n crossplane-system -o name | grep provider-helm | cut -d/ -f2)
kubectl create clusterrolebinding provider-helm-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=crossplane-system:$SA
```
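Why the grep-and-cut dance? Crossplane names the provider's ServiceAccount after the installed package revision, so it carries a hash-like suffix you can't hard-code. A runnable illustration of what the SA= pipeline extracts (the sample output and suffix below are made up):

```shell
# Simulate the output of `kubectl get sa -n crossplane-system -o name`
# and extract the bare ServiceAccount name, exactly as the SA= line does.
printf 'serviceaccount/crossplane\nserviceaccount/provider-helm-1a2b3c4d5e6f\n' \
  | grep provider-helm \
  | cut -d/ -f2
# prints: provider-helm-1a2b3c4d5e6f
```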
cluster-admin is workshop-grade

You're handing the provider a very broad credential. In a real cluster you'd narrow it to the kinds and namespaces that your charts need (Deployment, Service, ConfigMap in a specific namespace, say). Your workshop cluster is throwaway, so the blast radius is zero — but the production-grade pattern is a Role + RoleBinding.
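The narrower pattern mentioned above could look like the sketch below, assuming your charts only create Deployments, Services, ConfigMaps, and Secrets in workshop-helm. The ServiceAccount name suffix is made up; substitute the one the SA= command found:

```yaml
# Sketch of a production-grade, namespace-scoped grant for provider-helm.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: helm-chart-installer
  namespace: workshop-helm
rules:
  - apiGroups: ["", "apps"]
    resources: ["deployments", "services", "configmaps", "secrets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: provider-helm-installer
  namespace: workshop-helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: helm-chart-installer
subjects:
  - kind: ServiceAccount
    name: provider-helm-1a2b3c4d5e6f   # hypothetical; use your real SA name
    namespace: crossplane-system
```

Note that Helm also stores release state in Secrets, so the secrets rule is load-bearing, not optional.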

7.3 Configure the provider ⏱️ 4m

A ProviderConfig tells the provider how to authenticate to its external API. For provider-helm the "external API" is the cluster's own apiserver, and the provider uses InjectedIdentity — its own ServiceAccount token, no extra secret to manage.

Crossplane v2 split ProviderConfig into two flavors:

KindScopeUse when…
ClusterProviderConfigClusterOne config that any namespace can reference. The "default" config in a single-tenant cluster.
ProviderConfigNamespacedOne config per namespace, isolated from other tenants. The right pick when each team's helm charts target a different registry or use different credentials.

For this module you'll apply both — the cluster-scoped one for general use and a namespaced one to demonstrate the v2 isolation pattern.

```shell
kubectl create namespace workshop-helm
kubectl apply -f - <<'EOF'
apiVersion: helm.m.crossplane.io/v1beta1
kind: ClusterProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: InjectedIdentity
---
apiVersion: helm.m.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: workshop-helm
  namespace: workshop-helm
spec:
  credentials:
    source: InjectedIdentity
EOF
```

Verify:

```shell
kubectl get clusterproviderconfigs.helm.m.crossplane.io
kubectl get providerconfigs.helm.m.crossplane.io -n workshop-helm
```

Expected output (abridged):

```
NAME      AGE
default   2s

NAME            AGE
workshop-helm   2s
```

You now have two valid configs. The next step picks which one to use.

7.4 Install a Helm chart through Crossplane ⏱️ 5m

A Release.helm.m.crossplane.io is the namespaced Managed Resource kind provider-helm v1.2.0 ships. Each Release corresponds to one helm install: forProvider describes the chart and values, and providerConfigRef points at the config the provider should use.

Use the namespaced ProviderConfig you just applied:

```shell
kubectl apply -f - <<'EOF'
apiVersion: helm.m.crossplane.io/v1beta1
kind: Release
metadata:
  name: podinfo
  namespace: workshop-helm
spec:
  forProvider:
    chart:
      name: podinfo
      repository: oci://ghcr.io/stefanprodan/charts
      version: "6.7.1"
    namespace: workshop-helm
    values:
      replicaCount: 1
  providerConfigRef:
    kind: ProviderConfig
    name: workshop-helm
EOF
```

Two v2 details worth noticing:

  • providerConfigRef.kind is required in v2 — namespaced MRs can point at either a same-namespace ProviderConfig or a ClusterProviderConfig, so the kind has to be explicit. (If you change kind: ProviderConfig to kind: ClusterProviderConfig and name: workshop-helm to name: default, the Release will reconcile against the cluster-scoped config you also applied. Try it later if you're curious.)
  • The chart pulls over OCI (oci://ghcr.io/stefanprodan/charts). No helm repo add step happens here — provider-helm issues an OCI pull directly. This is the same protocol Helm 3.8+ supports natively.
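The swap described in the first bullet is a two-line change to the Release's spec. A sketch of just the replacement stanza:

```yaml
# Point the Release at the cluster-scoped config instead of the
# same-namespace ProviderConfig; everything else stays unchanged.
providerConfigRef:
  kind: ClusterProviderConfig
  name: default
```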

Watch the Release reconcile:

```shell
kubectl get release.helm.m.crossplane.io -n workshop-helm
```

Expected output:

```
NAME      CHART     VERSION   SYNCED   READY   STATE      REVISION   DESCRIPTION        AGE
podinfo   podinfo   6.7.1     True     True    deployed   1          Install complete   30s
```

Ready=True, STATE=deployed means the chart is installed. The provider has run the equivalent of helm install podinfo … and recorded the release in cluster state.
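The SYNCED and READY columns are printed from standard Crossplane conditions on the Release's status, and STATE/REVISION from its atProvider block. Roughly what kubectl get release.helm.m.crossplane.io podinfo -n workshop-helm -o yaml reports; the exact fields and reasons vary by provider release, so treat this as a shape sketch:

```yaml
# Illustrative status shape, not verbatim output.
status:
  atProvider:
    state: deployed      # surfaces as the STATE column
    revision: 1          # surfaces as the REVISION column
  conditions:
    - type: Synced       # Crossplane successfully applied your desired state
      status: "True"
      reason: ReconcileSuccess
    - type: Ready        # the external thing (the Helm release) is up
      status: "True"
      reason: Available
```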

Confirm the workload is actually running:

```shell
kubectl get deploy,svc -n workshop-helm
```

Expected output:

```
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/podinfo   1/1     1            1           45s

NAME              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
service/podinfo   ClusterIP   10.43.x.x    <none>        9898/TCP,9999/TCP   45s
```

Hit the podinfo /api/info endpoint via a quick port-forward:

```shell
kubectl port-forward -n workshop-helm svc/podinfo 9898:9898 &
sleep 2
curl -s http://localhost:9898/api/info | head -c 200
kill %1
```

Expected output (abridged):

```
{
  "hostname": "podinfo-…",
  "version": "6.7.1",
  "color": "#34577c",
  "message": "greetings from podinfo v6.7.1",
  …
}
```

When the tile turns green, Crossplane has installed a real Helm chart through a Provider. Same lifecycle contract as an XHello XR or an XApplication XR: kubectl delete release.helm.m.crossplane.io podinfo -n workshop-helm and the chart goes with it.

7.5 What just happened

You installed your first Provider. provider-helm extended Crossplane with a new managed-resource kind (Release.helm.m.crossplane.io); a ClusterProviderConfig and a ProviderConfig told the provider how to authenticate; one Release MR drove helm install against the cluster.

The pattern is identical for every other Provider in the Crossplane Marketplace: provider-aws-s3, provider-gcp-storage, provider-azure-storage, provider-kubernetes, and dozens more. The package name and the kinds change; the Provider → ProviderConfig → MR shape doesn't.

You've now seen both halves of Crossplane:

  • Composition with a function (modules 4 + 5) — Crossplane core composes plain Kubernetes resources directly. No Provider needed.
  • Provider with a managed resource (this module) — a Provider package teaches Crossplane to manage an external API.

Real platforms mix both. The 2xx track has a module on adopting existing in-cluster resources via provider-kubernetes — same Provider/ProviderConfig/MR shape, different superpower (managementPolicies: [Observe] to take ownership of resources someone else created).

Go deeper