This RFD describes the most crucial Kubernetes integrations for Oxide and proposes a roadmap for implementation.
Background
Oxide customers run production workloads in containers using container runtimes such as Docker Engine or containerd. As the number of production workloads increases, the complexity of managing containers increases as well. To manage this complexity, customers use container orchestration systems such as Kubernetes to automate the deployment, scaling, and management of containers.
Kubernetes is the most popular container orchestration system, evidenced by its dedicated KubeCon conference and its acceptance into the Cloud Native Computing Foundation (CNCF). The major cloud providers (i.e., AWS, GCP, Azure) and VMware all offer some or all of the integrations listed in this RFD to deploy and manage Kubernetes on their platforms.
Customers have come to expect these integrations across Kubernetes offerings and have requested that Oxide implement the integrations listed in this RFD so that Oxide can be a compelling choice for running Kubernetes.
Goals
The goals of this RFD are as follows.
Describe the different ways Oxide can integrate with Kubernetes and the customer problems those integrations solve.
Propose which Kubernetes integrations Oxide should start building now and which integrations can be deferred to a later date.
The Kubernetes landscape is quite large, so this RFD does not enumerate all possible Kubernetes integrations but instead focuses on integrations that have become de facto standards for Kubernetes.
In its terminal state, this RFD will serve as a foundational resource when implementing Kubernetes integrations for Oxide.
Kubernetes Overview
This section describes how Kubernetes works at a high level, lists the different integration points in Kubernetes, and notes the difference between native Kubernetes integrations and integrations for Kubernetes distributions.
Kubernetes Components
Kubernetes runs as a cluster of control plane nodes and worker nodes that each have various Kubernetes components installed. These Kubernetes components serve the Kubernetes API, run controllers to perform logic in response to Kubernetes API changes, and schedule containers to run on the cluster's nodes.
Control Plane Components
The following components run exclusively on control plane nodes.
kube-apiserver - Serves the Kubernetes API and provides different Kubernetes resources (e.g., Pod, Service).
etcd - Distributed key-value store for API data.
kube-scheduler - Schedules Pod resources to run on nodes. A Pod is a collection of one or more containers.
kube-controller-manager - Runs Kubernetes controllers that implement behavior for built-in Kubernetes resources.
cloud-controller-manager (optional) - Runs Kubernetes controllers that integrate with an underlying cloud provider (e.g., Oxide).
Control Plane & Worker Components
The following components run on both control plane nodes and worker nodes.
kubelet - Ensures Pod resources are running on the node in accordance with the data retrieved from the Kubernetes API.
kube-proxy (optional) - Maintains network rules to implement Service resources. A Service exposes applications running on pods behind a frontend IP.
Container Runtime - Software responsible for running containers on a node (e.g., Docker Engine, containerd).
Interacting with Kubernetes
A user typically interacts with Kubernetes via the following process.

The user creates a Kubernetes YAML manifest describing the desired state of a Kubernetes resource (e.g., Pod).

---
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: foo
    image: foo:latest

The user uses kubectl to write that manifest to the Kubernetes API.

kubectl apply --filename foo.yaml

The Kubernetes API saves the resource's desired state to etcd so that future requests to the Kubernetes API return a consistent state.

A Kubernetes controller that has been watching the Kubernetes API in a loop sees this resource's desired state does not match reality (e.g., the pod is not running anywhere).

The Kubernetes controller begins to reconcile reality with the desired state. This is usually done by making requests to the Kubernetes API to trigger other Kubernetes components.

These other Kubernetes components perform the necessary actions to ensure reality matches the desired state (e.g., the pod must be running somewhere). Here are some of those actions.

The kube-scheduler schedules the pod to run on a specific node.
The kubelet on that specific node tells the container runtime to run the pod's containers.
The container runtime runs the requested containers.

At this point the Kubernetes controller has successfully reconciled reality with the desired state. If anything changes (e.g., the user writes a new manifest, a node experiences an issue) the controller will reconcile things again.
Kubernetes' declarative API, combined with controllers that reconcile reality with desired state, makes Kubernetes an eventually consistent platform. That is, assuming no fatal errors occur, reality will eventually converge to the declared desired state.
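The control loop described above can be summarized with a short sketch. Everything in the sketch below is hypothetical and only illustrates the shape of a controller's reconcile loop; real controllers use client-go informers, work queues, and the Kubernetes API rather than the stand-in Cluster interface shown here.

package main

import (
    "fmt"
    "time"
)

// DesiredState and ActualState are hypothetical stand-ins for the state
// recorded in the Kubernetes API (etcd) and the state observed in the cluster.
type DesiredState struct{ Replicas int }
type ActualState struct{ Replicas int }

// Cluster is a hypothetical interface over the Kubernetes API and the
// components that act on it (kube-scheduler, kubelet, container runtime).
type Cluster interface {
    Desired() DesiredState
    Actual() ActualState
    CreatePod() error
    DeletePod() error
}

// reconcile nudges reality toward the desired state and reports whether a
// change was made. Real controllers do this in response to watch events.
func reconcile(c Cluster) (bool, error) {
    desired, actual := c.Desired(), c.Actual()
    switch {
    case actual.Replicas < desired.Replicas:
        return true, c.CreatePod()
    case actual.Replicas > desired.Replicas:
        return true, c.DeletePod()
    default:
        return false, nil // reality already matches the desired state
    }
}

// fakeCluster simulates a cluster so the sketch can run on its own.
type fakeCluster struct{ desired, actual int }

func (f *fakeCluster) Desired() DesiredState { return DesiredState{Replicas: f.desired} }
func (f *fakeCluster) Actual() ActualState   { return ActualState{Replicas: f.actual} }
func (f *fakeCluster) CreatePod() error      { f.actual++; return nil }
func (f *fakeCluster) DeletePod() error      { f.actual--; return nil }

func main() {
    c := &fakeCluster{desired: 3, actual: 0}
    for {
        changed, err := reconcile(c)
        if err != nil {
            fmt.Println("reconcile error:", err)
        }
        if !changed {
            fmt.Println("converged to desired state:", c.actual, "pods")
            return
        }
        time.Sleep(100 * time.Millisecond)
    }
}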
Kubernetes Integration Points
The following diagram shows the integration points in Kubernetes and lists whether the integration point is in scope or out of scope for this RFD.

Diagram Key | In RFD Scope? | Description |
---|---|---|
1 | No | Users often interact with the Kubernetes API using the kubectl command-line tool. |
2 | No | The Kubernetes API server can be extended with webhooks to control authentication and authorization logic. |
3 | Yes | The Kubernetes API provides different kinds of built-in resources. Custom resources can be added to the Kubernetes API through API extensions. |
4 | No | The Kubernetes scheduler decides which nodes to place pods on and can be extended with custom scheduling logic. |
5 | Yes | Controllers interact with the resources provided by the Kubernetes API to reconcile reality with desired state and perform various actions (e.g., creating pods). Custom controllers can be created to provide custom functionality. |
6 | Yes | Network plugins allow for different implementations of pod networking. |
7 | Yes | Device plugins are used to integrate custom hardware into Kubernetes. Storage plugins are used to add support for different storage devices. |
Native Kubernetes vs Kubernetes Distributions
Kubernetes Integration Points described the different integration points in Kubernetes. Integrations built for one of these points are meant to work across all Kubernetes installations, regardless of whether it is a manual bare-metal installation or an installation using a managed Kubernetes offering from a cloud provider. This RFD refers to these types of integrations as native Kubernetes integrations and they are the focus of this RFD.
There are also integrations that are specific to Kubernetes distributions. A Kubernetes distribution is a packaging of Kubernetes components that a user can self-host on their own infrastructure. Popular Kubernetes distributions include Rancher, OpenShift, and VMware Tanzu Kubernetes Grid. Kubernetes distributions are not to be confused with managed Kubernetes offerings, which are briefly described in Managed Kubernetes. Integrations for Kubernetes distributions do not work outside the Kubernetes distribution they were created for. This RFD will mention some of these integrations but they are not the focus of this RFD.
Managed Kubernetes
Cloud providers offer what is known as managed Kubernetes, a SaaS offering of Kubernetes that runs on, and is managed by, the respective cloud provider. AWS, GCP, and Azure each have their own managed Kubernetes offerings that this RFD will reference in the context of integrations.
Oxide does not have plans to offer managed Kubernetes to users at this time.
Native Kubernetes Integrations
This section describes native Kubernetes integrations for Oxide.
Cloud Controller Manager
The Cloud Controller Manager (CCM) is a control plane component that contains Kubernetes controllers that interact with a cloud provider API. The following diagram shows how the Cloud Controller Manager interacts with the rest of the Kubernetes components.

The Cloud Controller Manager implements the following Kubernetes controllers.
Node Controller - Responsible for updating Node resources as nodes are added or removed from the Kubernetes cluster. This controller uses the cloud provider API to determine whether a node has been deleted from the cloud provider to avoid mistakenly deleting a Node resource in the event of a network partition or temporary outage.
Route Controller - Responsible for configuring routes in the cloud provider so that pods running on different nodes can communicate with one another. This can remain unimplemented until Oxide implements its own Container Network Interface.
Service Controller - Responsible for configuring cloud provider infrastructure such as load balancers and IP addresses when a Service of type LoadBalancer is created. This controller will allow Kubernetes users to map an Oxide external IP to a Kubernetes service and update a load balancer with new backend targets as pods enter and leave a Kubernetes service.
The Cloud Controller Manager must be deployed to a Kubernetes cluster before it can be used. Ultimately, the Cluster API can automate such deployments but until that is implemented prescriptive documentation will need to be provided to Kubernetes users describing how to deploy the Cloud Controller Manager. The Cloud Controller Manager Administration documentation provides more details on deploying a Cloud Controller Manager to a Kubernetes cluster.
Development Considerations
The Cloud Controller Manager for Oxide requires the following.
A mechanism for allowing external clients to reach Kubernetes nodes over TCP or UDP. Ideally this mechanism would be an Oxide Load Balancer, but external IPs or a software load balancer (e.g., Nginx, HAProxy) can work well for an initial implementation. A Kubernetes Service of type LoadBalancer creates a Service of type NodePort under the hood, which tells every Kubernetes node in the cluster to listen on a given TCP or UDP port. Traffic received on that TCP or UDP port on any node will be routed to the correct Kubernetes service using the Container Network Interface (see the sketch after this list).
The following Oxide APIs.
If Oxide implemented resource metadata (e.g., tags, labels) then such metadata could be added to instances managed by the Cloud Controller Manager. That metadata could then be used to group Kubernetes resources in Oxide, prevent deletion of such resources, or enable other quality-of-life enhancements.
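To make the NodePort mechanism concrete, the following sketch uses client-go to create a Service of type LoadBalancer. Kubernetes allocates a node port on every node for it, and a Cloud Controller Manager for Oxide would then be responsible for pointing an external IP or load balancer at those node ports. The kubeconfig path, namespace, service name, selector, and ports below are illustrative assumptions.

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Load credentials from a kubeconfig file (path is illustrative).
    config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    // A Service of type LoadBalancer. Under the hood Kubernetes also allocates
    // a node port, and the cloud provider's Service controller is expected to
    // program external infrastructure to reach that port on every node.
    svc := &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "web"},
        Spec: corev1.ServiceSpec{
            Type:     corev1.ServiceTypeLoadBalancer,
            Selector: map[string]string{"app": "web"},
            Ports: []corev1.ServicePort{{
                Port:       80,
                TargetPort: intstr.FromInt(8080),
            }},
        },
    }

    created, err := clientset.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
    if err != nil {
        panic(err)
    }
    // Until a Cloud Controller Manager fulfills the request, the service's
    // external IP remains pending.
    fmt.Println("created service", created.Name, "node ports:", created.Spec.Ports)
}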
Development Effort
Developing the Cloud Controller Manager for Oxide is a medium project that will span several weeks.
The Developing Cloud Controller Manager documentation describes how to develop a Cloud Controller Manager integration and links to the cloudprovider.Interface that must be implemented.
type Interface interface {
    Initialize(clientBuilder ControllerClientBuilder, stop <-chan struct{})
    LoadBalancer() (LoadBalancer, bool)
    Instances() (Instances, bool)
    InstancesV2() (InstancesV2, bool)
    Zones() (Zones, bool)
    Clusters() (Clusters, bool)
    Routes() (Routes, bool)
    ProviderName() string
    HasClusterID() bool
}
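A minimal sketch of what an Oxide implementation of this interface could look like follows. Only the interface and the RegisterCloudProvider hook come from k8s.io/cloud-provider; the provider name, package layout, and the choice to stub out every controller are assumptions. A real implementation would return Oxide-backed LoadBalancer and InstancesV2 implementations instead of nil.

package oxide

import (
    "io"

    cloudprovider "k8s.io/cloud-provider"
)

// ProviderName is a hypothetical provider name for Oxide.
const ProviderName = "oxide"

// cloud is a skeletal cloudprovider.Interface implementation. An Oxide API
// client and its configuration would live here.
type cloud struct{}

func init() {
    // Register the provider so cloud-controller-manager can construct it from
    // its --cloud-provider flag.
    cloudprovider.RegisterCloudProvider(ProviderName, func(config io.Reader) (cloudprovider.Interface, error) {
        return &cloud{}, nil
    })
}

func (c *cloud) Initialize(clientBuilder cloudprovider.ControllerClientBuilder, stop <-chan struct{}) {}

// Returning false for a controller disables it. The stubs below disable
// everything; the RFD proposes implementing LoadBalancer and InstancesV2.
func (c *cloud) LoadBalancer() (cloudprovider.LoadBalancer, bool) { return nil, false }
func (c *cloud) Instances() (cloudprovider.Instances, bool)       { return nil, false }
func (c *cloud) InstancesV2() (cloudprovider.InstancesV2, bool)   { return nil, false }
func (c *cloud) Zones() (cloudprovider.Zones, bool)               { return nil, false }
func (c *cloud) Clusters() (cloudprovider.Clusters, bool)         { return nil, false }
func (c *cloud) Routes() (cloudprovider.Routes, bool)             { return nil, false }
func (c *cloud) ProviderName() string                             { return ProviderName }
func (c *cloud) HasClusterID() bool                               { return true }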
Risks in Deferring
There’s moderate risk in deferring the implementation of the Cloud Controller Manager. Without the Cloud Controller Manager customers will have to take on the following responsibilities.
Find a mechanism to expose Kubernetes services to external clients and either keep that mechanism updated manually or create custom automation as Kubernetes nodes and services are added and removed.
Monitor Kubernetes node health manually to determine whether or not nodes have been deleted in Oxide and should be removed from the Kubernetes cluster. Normally the Cloud Controller Manager queries the cloud provider API for instance state, but a user would need to do this manually if no Cloud Controller Manager exists. As an example, the VMware Cloud Controller Manager queries for instance state in its InstanceExistsByProviderID method, which is one of the methods in the Instances or InstancesV2 interface for the Cloud Controller Manager. A sketch of how an Oxide Cloud Controller Manager could perform the same check follows this list.
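The sketch below shows the shape of that check for a hypothetical Oxide InstancesV2 implementation. The InstanceExists signature comes from the cloudprovider.InstancesV2 interface; the oxideClient interface, Instance type, error value, and providerID format are all placeholders for whatever the Oxide API client actually provides.

package oxide

import (
    "context"
    "errors"
    "strings"

    v1 "k8s.io/api/core/v1"
)

// oxideClient and Instance are placeholders for the Oxide API client.
type Instance struct{ ID string }

type oxideClient interface {
    InstanceView(ctx context.Context, id string) (*Instance, error)
}

// errInstanceNotFound is a hypothetical sentinel error from the client.
var errInstanceNotFound = errors.New("instance not found")

type instancesV2 struct {
    client oxideClient
}

// InstanceExists reports whether the Oxide instance backing a Node still
// exists. The node controller uses this answer to decide whether a Node
// resource should be deleted, rather than deleting it on a transient outage.
func (i *instancesV2) InstanceExists(ctx context.Context, node *v1.Node) (bool, error) {
    // node.Spec.ProviderID is expected to carry the Oxide instance ID, e.g.
    // "oxide://<instance-id>" (the format is an assumption).
    id := parseProviderID(node.Spec.ProviderID)
    inst, err := i.client.InstanceView(ctx, id)
    if errors.Is(err, errInstanceNotFound) {
        return false, nil
    }
    if err != nil {
        return false, err
    }
    return inst != nil, nil
}

// parseProviderID is a hypothetical helper that strips the provider prefix.
func parseProviderID(providerID string) string {
    return strings.TrimPrefix(providerID, "oxide://")
}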
Cluster API
The Cluster API (CAPI) allows users to manage Kubernetes clusters using Kubernetes itself. The following excerpt from the Cluster API Book describes the Cluster API quite well.
Started by the Kubernetes Special Interest Group (SIG) Cluster Lifecycle, the Cluster API project uses Kubernetes-style APIs and patterns to automate cluster lifecycle management for platform operators. The supporting infrastructure, like virtual machines, networks, load balancers, and VPCs, as well as the Kubernetes cluster configuration are all defined in the same way that application developers operate deploying and managing their workloads. This enables consistent and repeatable cluster deployments across a wide variety of infrastructure environments.
Developing the Cluster API provider for Oxide will simplify Kubernetes cluster management for customers, especially customers that manage many Kubernetes clusters at scale or frequently deploy ephemeral Kubernetes clusters via automation. There are Cluster API providers for AWS, GCP, Azure, and VMware that power their respective managed Kubernetes offerings and allow customers to manage Kubernetes clusters on their platforms. If customers are already using the Cluster API for these platforms then the Cluster API provider for Oxide will significantly simplify the migration of Kubernetes workloads to Oxide.
Once the Cluster API provider for Oxide is implemented a user would use a Kubernetes YAML manifest like the following to create a Kubernetes cluster.
# The `Cluster` resource contains configuration for the Kubernetes cluster.
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: oxide-cluster01
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["10.96.0.0/12"]
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
    kind: OxideCluster
    name: oxide-cluster01
# The `OxideCluster` resource contains configuration used to create the
# Kubernetes cluster on Oxide. A controller will be watching this resource for
# changes to perform the necessary reconciliation logic.
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: OxideCluster
metadata:
  name: oxide-cluster01
spec: {}
Development Considerations
The Cluster API provider for Oxide requires the following.
A Cloud Controller Manager to manage load balancers and node health within Oxide. The Cluster API provider will deploy this Cloud Controller Manager into the managed Kubernetes cluster.
The following Oxide APIs.
Development Effort
Developing the Cluster API provider for Oxide is a large project that will span several months.
The Developing Cluster API Providers chapter of the Cluster API Book describes how to develop a Cluster API provider. A Cluster API provider must respect one or more of the following Cluster API Provider Contracts, depending on the desired functionality.
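As a rough sketch of what the infrastructure provider's API types might look like, the following defines a minimal OxideCluster resource in the style used by Cluster API providers, matching the OxideCluster manifest shown earlier. Every field here (project, VPC, control plane endpoint) is an assumption about what an Oxide provider would need; deepcopy functions, CRD generation via controller-gen, and status conditions are omitted.

package v1alpha1

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// OxideClusterSpec describes the Oxide infrastructure that should back a
// Cluster API Cluster. The fields are illustrative assumptions.
type OxideClusterSpec struct {
    // Project is the Oxide project the cluster's instances are created in.
    Project string `json:"project"`
    // VPC is the Oxide VPC used for cluster networking.
    VPC string `json:"vpc,omitempty"`
    // ControlPlaneEndpoint is the address the Kubernetes API is reachable at,
    // as required by the Cluster API infrastructure provider contract.
    ControlPlaneEndpoint APIEndpoint `json:"controlPlaneEndpoint,omitempty"`
}

// APIEndpoint is a host/port pair for the Kubernetes API server.
type APIEndpoint struct {
    Host string `json:"host"`
    Port int32  `json:"port"`
}

// OxideClusterStatus reports the observed state of the Oxide infrastructure.
type OxideClusterStatus struct {
    // Ready indicates the infrastructure is provisioned, per the provider
    // contract.
    Ready bool `json:"ready"`
}

// OxideCluster is the infrastructure resource referenced by a Cluster's
// infrastructureRef in the manifest above.
type OxideCluster struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   OxideClusterSpec   `json:"spec,omitempty"`
    Status OxideClusterStatus `json:"status,omitempty"`
}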
If Oxide implemented resource metadata (e.g., tags, labels) then such metadata could be added to clusters managed by the Cluster API. That metadata could then be used to group Kubernetes resources in Oxide, prevent deletion of such resources, or enable other quality-of-life enhancements.
Risks in Deferring
There’s minimal risk in deferring the implementation of the Cluster API provider. Customers have the following alternative options to manage Kubernetes clusters on Oxide.
Manually create and manage Kubernetes clusters using kubeadm.
Use Rancher and its Rancher Kubernetes Engine (RKE). There are tutorials that cover installing and using Rancher on virtual machines. Multiple customers have expressed interest in using Rancher to manage their Kubernetes clusters.
Use talosctl. A Talos Linux example can be found in the oxidecomputer/tf-configs repository.
Use Terraform. A Kubernetes example can be found in the oxidecomputer/tf-configs repository.
Container Network Interface
The Container Network Interface (CNI) defines a specification and libraries for writing plugins that implement container networking within Kubernetes. The following excerpt from the official website describes the Container Network Interface quite well.
CNI (Container Network Interface), a Cloud Native Computing Foundation project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux and Windows containers, along with a number of supported plugins. CNI concerns itself only with network connectivity of containers and removing allocated resources when the container is deleted. Because of this focus, CNI has a wide range of support and the specification is simple to implement.
There are many CNI plugins available for container networking, each with their own unique set of capabilities. AWS and Azure develop and maintain custom CNI plugins for their managed Kubernetes offerings while GCP and Digital Ocean use open source, third-party CNI plugins for their offerings.
The following table shows the supported CNI plugins for different managed Kubernetes offerings.
Managed Kubernetes Offering | Supported CNI Plugins |
---|---|
Elastic Kubernetes Service (EKS) | Amazon VPC CNI |
Google Kubernetes Engine (GKE) | Calico, Cilium |
Azure Kubernetes Service (AKS) | Azure CNI |
VMware Tanzu Kubernetes Grid | Antrea |
Digital Ocean Kubernetes (DOKS) | Cilium |
Managed Kubernetes offerings generally do not let users change the CNI plugin. However, platforms that offer managed Kubernetes make it a point to note that no such restrictions exist when deploying self-managed Kubernetes on their platform (e.g., Kubernetes on EC2 instances). In those cases, users are free to choose their preferred CNI plugin.
The following is a list of the most popular open source, third-party CNI plugins.
Calico
Cilium
Flannel
Third-party CNI plugins are the preferred way to implement container networking in Kubernetes. Some major cloud providers even use third-party CNI plugins for their managed Kubernetes offerings instead of developing their own CNI plugin. With such maturity and production stability of third-party CNI plugins, it does not make sense for Oxide to develop a custom CNI plugin at this time.
Development Considerations
The Container Network Interface for Oxide requires the following.
Additional research to identify use cases that an existing third-party CNI plugin does not already satisfy.
Adhering to the Container Network Interface Specification.
Implementing the Kubernetes Network Model and meeting the Network Plugin Requirements.
Development Effort
Developing the Container Network Interface plugin for Oxide is a large project that requires more research to produce a more accurate estimate.
Risks in Deferring
There is no risk in deferring the implementation of the Container Network Interface plugin. Cloud providers such as GCP and Digital Ocean do not implement their own CNI plugin and customers have the following alternative options to implement container networking for Kubernetes.
Use a third-party CNI plugin like Calico, Cilium, or Flannel. This is the preferred option.
Use the CNI plugin provided by the Kubernetes cluster manager (e.g., Rancher). Generally this is one of the third-party CNI plugins mentioned above that’s automatically managed by the Kubernetes cluster manager.
Container Storage Interface
The Container Storage Interface (CSI) defines a specification for writing plugins that expose storage systems to Kubernetes. The following excerpt from the Container Storage Interface (CSI) for Kubernetes GA blog post describes the Container Storage Interface quite well.
CSI was developed as a standard for exposing arbitrary block and file storage storage systems to containerized workloads on Container Orchestration Systems (COs) like Kubernetes. With the adoption of the Container Storage Interface, the Kubernetes volume layer becomes truly extensible. Using CSI, third-party storage providers can write and deploy plugins exposing new storage systems in Kubernetes without ever having to touch the core Kubernetes code.
The major cloud providers have their own CSI plugins that can be used with their managed Kubernetes offerings or self-hosted Kubernetes clusters running on their platforms. The following table lists the CSI plugins for the common managed Kubernetes offerings.
Managed Kubernetes Offering | CSI Plugins |
---|---|
Elastic Kubernetes Service (EKS) | Amazon EBS CSI, Amazon EFS CSI, Amazon FSx for Lustre CSI, Amazon FSx for OpenZFS CSI, Amazon File Cache CSI |
Google Kubernetes Engine (GKE) | Compute Engine Persistent Disk CSI, Filestore CSI, Cloud Storage FUSE CSI, Secret Manager CSI |
Azure Kubernetes Service (AKS) | Azure Disk CSI, Azure File CSI, Azure Blob CSI, Azure Lustre CSI |
VMware Tanzu Kubernetes Grid | vSphere CSI |
Digital Ocean Kubernetes (DOKS) | DigitalOcean Block Storage CSI |
Unlike with CNI plugins, managed Kubernetes offerings allow users to add additional CSI plugins to their Kubernetes clusters. For self-hosted Kubernetes clusters, users commonly deploy an open source, third-party CSI plugin.
The following is a list of the most popular open source, third-party CSI plugins.
Longhorn
OpenEBS
Rook
Oxide’s block storage service would be an ideal target to create a CSI plugin for. Once the CSI plugin for Oxide is implemented a user would use a Kubernetes YAML manifest like the following to request a volume and attach it to a container.
# Define a storage class for the CSI plugin to use.
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: oxide-storage-class
provisioner: csi.oxide.computer
volumeBindingMode: WaitForFirstConsumer
# Request persistent volume from Oxide.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oxide-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: oxide-storage-class
  resources:
    requests:
      storage: 100Gi
# Attach the requested volume to a container.
---
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: example
    image: debian:latest
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /mnt/data/output.log; sleep 5; done"]
    volumeMounts:
    - name: example-data
      mountPath: /mnt/data
  volumes:
  - name: example-data
    persistentVolumeClaim:
      claimName: oxide-claim
Development Considerations
The Container Storage Interface plugin for Oxide requires the following.
Oxide API to update a disk. This is mainly to update attributes of the disk (e.g., change the name).
Oxide API to resize a disk. This resize must happen without restarting the instance that the disk is attached to so that the CSI plugin can act upon the resize (e.g., expand file system).
Support a higher number of attached disks per instance. Currently, a maximum of 8 disks can be attached to a single instance but this limit would need to be raised for larger Kubernetes clusters.
If Oxide implemented resource metadata (e.g., tags, labels) then such metadata could be added to disks managed by the Container Storage Interface. That metadata could then be used to group Kubernetes resources in Oxide, prevent deletion of such resources, or enable other quality-of-life enhancements.
Development Effort
Developing the Container Storage Interface plugin for Oxide is a medium project that will span several months.
The Kubernetes CSI Developer Documentation describes how to develop a Container Storage Interface plugin.
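At its core, a CSI plugin is a gRPC server that implements the Identity, Controller, and Node services from the CSI specification. The sketch below wires up only the Identity service using the Go bindings from the container-storage-interface/spec repository; the driver name, version, and socket path are assumptions, the Controller and Node services (where disk create, attach, and resize logic would live) are omitted, and depending on the version of the generated bindings the server struct may also need to embed the generated Unimplemented type.

package main

import (
    "context"
    "net"
    "os"

    csi "github.com/container-storage-interface/spec/lib/go/csi"
    "google.golang.org/grpc"
)

// identityServer implements the CSI Identity service, which lets Kubernetes
// discover the plugin's name and capabilities.
type identityServer struct{}

func (s *identityServer) GetPluginInfo(ctx context.Context, req *csi.GetPluginInfoRequest) (*csi.GetPluginInfoResponse, error) {
    return &csi.GetPluginInfoResponse{
        Name:          "csi.oxide.computer", // matches the StorageClass provisioner above
        VendorVersion: "0.0.1",
    }, nil
}

func (s *identityServer) GetPluginCapabilities(ctx context.Context, req *csi.GetPluginCapabilitiesRequest) (*csi.GetPluginCapabilitiesResponse, error) {
    // A real driver would advertise the controller service and volume
    // expansion support here.
    return &csi.GetPluginCapabilitiesResponse{}, nil
}

func (s *identityServer) Probe(ctx context.Context, req *csi.ProbeRequest) (*csi.ProbeResponse, error) {
    return &csi.ProbeResponse{}, nil
}

func main() {
    // The kubelet and the CSI sidecar containers talk to the driver over a
    // unix socket.
    socket := "/tmp/csi.sock"
    os.Remove(socket)
    listener, err := net.Listen("unix", socket)
    if err != nil {
        panic(err)
    }

    server := grpc.NewServer()
    csi.RegisterIdentityServer(server, &identityServer{})
    // csi.RegisterControllerServer and csi.RegisterNodeServer would be called
    // here with implementations that drive the Oxide disk APIs.
    if err := server.Serve(listener); err != nil {
        panic(err)
    }
}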
Risks in Deferring
There's minimal risk in deferring the implementation of the Container Storage Interface plugin. Customers have the following alternative options to provide persistent storage for Kubernetes clusters on Oxide.
Deploy a third-party CSI plugin to the Kubernetes cluster. Longhorn is a popular choice due to its high availability, persistence, and snapshot support.
Kubernetes Distribution Integrations
This section describes Kubernetes distribution integrations for Oxide.
Rancher Node Driver
The Rancher Node Driver is an implementation of Rancher Machine that allows Rancher to provision nodes to launch and manage Kubernetes clusters.
Oxide customers have requested a Rancher Node Driver for Oxide to onboard their Kubernetes workloads to Oxide. One customer even contributed the initial implementation of the Rancher Node Driver for Oxide in oxidecomputer/rancher-machine-driver-oxide#1.
Implementations of the Rancher Node Driver implement the drivers.Driver interface, shown below, and register themselves as Rancher plugins by calling plugin.RegisterDriver.
type Driver interface {
    Create() error
    DriverName() string
    GetCreateFlags() []mcnflag.Flag
    GetIP() (string, error)
    GetMachineName() string
    GetSSHHostname() (string, error)
    GetSSHKeyPath() string
    GetSSHPort() (int, error)
    GetSSHUsername() string
    GetURL() (string, error)
    GetState() (state.State, error)
    Kill() error
    PreCreateCheck() error
    Remove() error
    Restart() error
    SetConfigFromFlags(opts DriverOptions) error
    Start() error
    Stop() error
}
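For context, a node driver binary is a small main that hands an implementation of this interface to Rancher's plugin machinery via plugin.RegisterDriver, as described above. In the sketch below, the oxide import path and the NewDriver constructor are placeholders rather than the actual layout of oxidecomputer/rancher-machine-driver-oxide.

package main

import (
    "github.com/rancher/machine/libmachine/drivers/plugin"

    // Hypothetical import path for a package providing a drivers.Driver
    // implementation; the real repository layout may differ.
    oxide "github.com/oxidecomputer/rancher-machine-driver-oxide/pkg/driver"
)

func main() {
    // plugin.RegisterDriver starts the plugin RPC server that Rancher Machine
    // uses to drive the node driver.
    plugin.RegisterDriver(oxide.NewDriver("", ""))
}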
Once a Rancher Node Driver binary is built it must be deployed to Rancher before it can be used. This is covered in the Rancher Node Driver documentation.
Development Considerations
The Rancher Node Driver for Oxide requires the following.
The following Oxide APIs.
If Oxide implemented resource metadata (e.g., tags, labels) then such metadata could be added to instances managed by the Rancher Node Driver. That metadata could then be used to group Kubernetes resources in Oxide, prevent deletion of such resources, or enable other quality-of-life enhancements.
Development Effort
Oxide already has a Rancher Node Driver. The source code can be found in the oxidecomputer/rancher-machine-driver-oxide GitHub repository.
Risks in Deferring
Not applicable. There already is a Rancher Node Driver for Oxide.
Proposed Roadmap
This section describes the proposed development roadmap for implementing Kubernetes integrations for Oxide.
Phase 1
This phase can be started immediately. The work during this phase will help enumerate implementation details for future work on the Cluster API while providing immediate value to customers.
Implement the Cloud Controller Manager. This is a medium project and represents the bulk of the work for this phase. Once finished, Kubernetes clusters running on Oxide will be able to use the Cloud Controller Manager to interact with the Oxide API to manage Service and Node resources. For example, users can create Service resources of type LoadBalancer and the Service controller within the Cloud Controller Manager can allocate an external IP to expose that service's node port to external clients. Additionally, the Node controller within the Cloud Controller Manager will initialize a node as it joins the Kubernetes cluster and periodically reach out to the Oxide API to refresh the current state of the node as seen from Oxide.
Implement a Rancher UI Extension per oxidecomputer/rancher-machine-driver-oxide#10. This adds Oxide branding and configuration validation to the Rancher Node Driver.
Fix the following quality-of-life issues with the Rancher Node Driver.
Phase 2
This phase can be started immediately after Phase 1. The work during this phase may be preempted by work from future phases depending on learnings and changes to customer requirements. For example, it may be determined that implementing a Container Storage Interface is more urgent than the work described here.
Implement the Cluster API. This is a large project and represents the bulk of the work for this phase. Once finished, customers will be able to use Kubernetes to deploy Kubernetes clusters on Oxide.
Phase 3
This phase can be started immediately after Phase 2 unless urgency or priority changes result in preempting the work in Phase 2.
Implement the Container Storage Interface. This is a medium project and represents the bulk of the work for this phase. Once finished, customers will be able to use Oxide storage within their Kubernetes clusters.
Unplanned Work
There are no plans to implement the following Kubernetes integrations as part of this initial integrations work.
External References
[ccm-vmware-instance-exists] https://github.com/kubernetes/cloud-provider-vsphere/blob/ed2b673aed34993ca545068dcb8cd9a3e9241631/pkg/cloudprovider/vsphere/instances.go#L153-L194
[cloud-controller-manager-administration] https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/#cloud-controller-manager
[cloud-controller-manager-developing] https://kubernetes.io/docs/tasks/administer-cluster/developing-cloud-controller-manager/
[cloud-controller-manager] https://kubernetes.io/docs/concepts/architecture/cloud-controller/
[cluster-api-book] https://cluster-api.sigs.k8s.io/
[cluster-api-provider-aws] https://github.com/kubernetes-sigs/cluster-api-provider-aws
[cluster-api-provider-azure] https://github.com/kubernetes-sigs/cluster-api-provider-azure
[cluster-api-provider-contract-bootstrapconfig] https://cluster-api.sigs.k8s.io/developer/providers/contracts/bootstrap-config
[cluster-api-provider-contract-clusterctl] https://cluster-api.sigs.k8s.io/developer/providers/contracts/clusterctl
[cluster-api-provider-contract-controlplane] https://cluster-api.sigs.k8s.io/developer/providers/contracts/control-plane
[cluster-api-provider-contract-infracluster] https://cluster-api.sigs.k8s.io/developer/providers/contracts/infra-cluster
[cluster-api-provider-contract-inframachine] https://cluster-api.sigs.k8s.io/developer/providers/contracts/infra-machine
[cluster-api-provider-contract-ipam] https://cluster-api.sigs.k8s.io/developer/providers/contracts/ipam
[cluster-api-provider-contracts] https://cluster-api.sigs.k8s.io/developer/providers/contracts/overview
[cluster-api-provider-development] https://cluster-api.sigs.k8s.io/developer/providers/overview
[cluster-api-provider-gcp] https://github.com/kubernetes-sigs/cluster-api-provider-gcp
[cluster-api-provider-vmware] https://github.com/kubernetes-sigs/cluster-api-provider-vmware
[cluster-api] https://github.com/kubernetes-sigs/cluster-api
[cncf] https://www.cncf.io/
[cni-amazon-vpc] https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html
[cni-antrea] https://www.vmware.com/products/cloud-infrastructure/antrea-container-networking
[cni-azure] https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni?tabs=configure-networking-portal
[cni-calico] https://www.tigera.io/project-calico
[cni-cilium] https://cilium.io/
[cni-flannel] https://github.com/flannel-io/flannel
[cni-requirements] https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#network-plugin-requirements
[cni-spec] https://github.com/containernetworking/cni/blob/3b4dfc5dffa2afac26d2c083ca68cbe9b5837097/SPEC.md
[cni] https://www.cni.dev/
[containerd] https://containerd.io/
[csi-aws-ebs] https://github.com/kubernetes-sigs/aws-ebs-csi-driver
[csi-aws-efs] https://github.com/kubernetes-sigs/aws-efs-csi-driver
[csi-aws-file-cache] https://github.com/kubernetes-sigs/aws-file-cache-csi-driver
[csi-aws-fsx-openzfs] https://github.com/kubernetes-sigs/aws-fsx-openzfs-csi-driver
[csi-aws-fsx] https://github.com/kubernetes-sigs/aws-fsx-csi-driver
[csi-azuredisk] https://github.com/kubernetes-sigs/azuredisk-csi-driver
[csi-azurefile] https://github.com/kubernetes-sigs/azurefile-csi-driver
[csi-azurelustre] https://github.com/kubernetes-sigs/azurelustre-csi-driver
[csi-blob] https://github.com/kubernetes-sigs/blob-csi-driver
[csi-blog] https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/
[csi-developer-documentation] https://kubernetes-csi.github.io/docs/
[csi-digitalocean] https://github.com/digitalocean/csi-digitalocean
[csi-gcp-compute-persistent-disk] https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver
[csi-gcp-filestore] https://github.com/kubernetes-sigs/gcp-filestore-csi-driver
[csi-gcp-secrets-store] https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp
[csi-gcs-fuse] https://github.com/GoogleCloudPlatform/gcs-fuse-csi-driver
[csi-longhorn] https://longhorn.io/
[csi-openebs] https://openebs.io/
[csi-rook] https://rook.io/
[csi-vsphere] https://github.com/kubernetes-sigs/vsphere-csi-driver
[csi] https://github.com/container-storage-interface/spec/blob/98819c45a37a67e0cd466bd02b813faf91af4e45/spec.md
[docker-engine] https://docs.docker.com/engine/
[k8s-extension-points] https://kubernetes.io/docs/concepts/extend-kubernetes/
[k8s-interface-cloudprovider-interface] https://github.com/kubernetes/cloud-provider/blob/5c0b9854b1b24a5950aeaa17003e78b98709d708/cloud.go#L43-L69
[k8s-interface-drivers-driver] https://github.com/rancher/machine/blob/50f36fa166490c796d5dcf3146b9b38a963387ee/libmachine/drivers/drivers.go#L12-L74
[k8s-interface-plugin-registerdriver] https://github.com/rancher/machine/blob/50f36fa166490c796d5dcf3146b9b38a963387ee/libmachine/drivers/plugin/register_driver.go#L22-L63
[k8s-network-model] https://kubernetes.io/docs/concepts/services-networking/#the-kubernetes-network-model
[k8s-sig-cluster-lifecycle] https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle#readme
[k8s] https://kubernetes.io/
[kubecon] https://www.cncf.io/kubecon-cloudnativecon-events/
[openshift] https://www.redhat.com/en/technologies/cloud-computing/openshift
[oxide-api-disk-delete] https://docs.oxide.computer/api/disk_delete
[oxide-api-floating-ip-attach] https://docs.oxide.computer/api/floating_ip_attach
[oxide-api-floating-ip-create] https://docs.oxide.computer/api/floating_ip_create
[oxide-api-floating-ip-delete] https://docs.oxide.computer/api/floating_ip_delete
[oxide-api-floating-ip-detach] https://docs.oxide.computer/api/floating_ip_detach
[oxide-api-floating-ip-list] https://docs.oxide.computer/api/floating_ip_list
[oxide-api-floating-ip-update] https://docs.oxide.computer/api/floating_ip_update
[oxide-api-floating-ip-view] https://docs.oxide.computer/api/floating_ip_view
[oxide-api-instance-create] https://docs.oxide.computer/api/instance_create
[oxide-api-instance-delete] https://docs.oxide.computer/api/instance_delete
[oxide-api-instance-nic-list] https://docs.oxide.computer/api/instance_network_interface_list
[oxide-api-instance-list] https://docs.oxide.computer/api/instance_list
[oxide-api-instance-reboot] https://docs.oxide.computer/api/instance_reboot
[oxide-api-instance-start] https://docs.oxide.computer/api/instance_start
[oxide-api-instance-stop] https://docs.oxide.computer/api/instance_stop
[oxide-api-instance-view] https://docs.oxide.computer/api/instance_view
[oxide-api-ssh-key-create] https://docs.oxide.computer/api/current_user_ssh_key_create
[oxide-api-ssh-key-delete] https://docs.oxide.computer/api/current_user_ssh_key_delete
[oxidecomputer-tf-configs] https://github.com/oxidecomputer/tf-configs
[rancher-machine-driver-oxide-10] https://github.com/oxidecomputer/rancher-machine-driver-oxide/issues/10
[rancher-machine-driver-oxide-1] https://github.com/oxidecomputer/rancher-machine-driver-oxide/pull/1
[rancher-machine-driver-oxide-2] https://github.com/oxidecomputer/rancher-machine-driver-oxide/issues/2
[rancher-machine-driver-oxide-4] https://github.com/oxidecomputer/rancher-machine-driver-oxide/issues/4
[rancher-machine-driver-oxide] https://github.com/oxidecomputer/rancher-machine-driver-oxide
[rancher-machine] https://github.com/rancher/machine
[rancher-node-driver] https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-node-drivers
[rancher-rke2-longhorn] https://ranchergovernment.com/blog/article-simple-rke2-longhorn-and-rancher-install
[rancher-ui-extensions] https://extensions.rancher.io
[rancher] https://www.rancher.com/
[vmware-tanzu-kubernetes-grid] https://www.vmware.com/products/app-platform/tanzu-kubernetes-grid