CNDI makes it as easy to deploy production application clusters on your infrastructure as it is to buy a managed PaaS. In this first Deployment Target breakdown, we’ll talk about how we deploy to AWS.
When deploying an application with CNDI, the first decision is where it should be hosted. CNDI introduces the concept of “Deployment Targets”. When you select a Deployment Target you are choosing where to deploy your cluster.
The first deployment target we implemented in the project was “ec2”, powered by MicroK8s. Our tool leverages Terraform to provision virtual machines in AWS EC2 and join them together into a MicroK8s cluster.
For those of you familiar with AWS’ other Kubernetes offerings, this may come as a surprise: why not just use EKS?
Because we build our cluster platform on top of what we at Polyseam call “pure compute”, like “ec2”, our clusters can be picked up and moved to any other virtual machine platform and run in exactly the same way. In fact, changing the deployment target from “ec2” to “gcp” or “azure” is the only line of code needed to move your cluster from one provider to the next!
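As a rough sketch of what that single-line change looks like, consider a config fragment along these lines (the key name shown here is illustrative, not the verbatim CNDI schema):

```yaml
# Hypothetical CNDI config fragment -- key name is illustrative, not verbatim
deployment_target: ec2  # swap this one value to "gcp" or "azure" to move the cluster
```

Everything else in the cluster definition stays the same; CNDI and Terraform handle provisioning the equivalent virtual machines on the new platform.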
EKS charges an hourly fee per cluster and requires you to use only specific EC2 instance types, which may not be the most cost-effective option for your workload. It also caps the number of Pods that can run on each instance type. With CNDI, there is no per-cluster fee, no service premium, and no limit on which instance types you use as nodes or what you run on them.
However, if the managed service from AWS is worth that cost to you, we have you covered too, because we just launched support for “eks” clusters as well!
Stay tuned to learn more about that, and all the other stuff we are bringing to the open source community each week.
What new deployment targets are interesting to you? Please check out our roadmap and let us know!