Not everything is perfect - Frustrations with AWS
May 3, 2020
While I may love building with AWS, it’s not always rainbows and sunshine
Those who know me well know, for starters, that I love cloud computing. They also probably know that Amazon Web Services is my favourite provider to use. I have experience with four providers in total, including the big three (AWS, Azure, GCP), with AWS being the one I know best. You could say it's just comfort in what I know; conceptually I can figure out AWS a lot easier than, say, GCP. At work I have been using GCP a lot more than Azure lately, and it's number two on my list. Google Kubernetes Engine is simply the gold standard when it comes to Kubernetes, and it makes using K8s so much easier. Nevertheless, I still chose to use Elastic Kubernetes Service on AWS, which has its quirks. In fact, there are several quirks in AWS that have thrown me from time to time, leading to incredibly frustrating moments. Here is where I talk about a few of those moments.
Elastic Container Service, harder than Kubernetes
I could put this down to straight-up inexperience, but trying to get started with ECS was an absolute nightmare. I was experimenting with the service as a medium-term solution for container orchestration. EKS costs around 75 USD per month for the management plane, whereas ECS is free and you only pay for compute. I was able to get relatively simple hello world examples to work, but my attempts at grasping how the service would work for my own needs failed. It reached the stage where the time spent troubleshooting would be greater than the time spent standing up a new managed Kubernetes cluster. I know that the world going forward is using Kubernetes; this was more an attempt to save some money while still having my containers orchestrated. ECS probably is not going anywhere any time soon, but I would reckon we will see its usage numbers start to go down.
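For anyone curious about the moving parts I was wrestling with, here is a rough sketch of what standing up a single container on ECS Fargate involves using boto3. The cluster name, subnets, security group and image below are placeholders, and a real setup would also want an execution role and something like a load balancer in front; treat it as an illustration of the concepts (task definition, service, cluster) rather than a working deployment.

```python
import boto3

ecs = boto3.client("ecs")  # assumes credentials and a default region are already configured

# An ECS "task definition" is roughly the equivalent of a pod spec.
task_def = ecs.register_task_definition(
    family="hello-world",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    # executionRoleArn would be needed here for ECR images or log shipping
    containerDefinitions=[
        {
            "name": "web",
            "image": "nginx:latest",
            "portMappings": [{"containerPort": 80}],
            "essential": True,
        }
    ],
)

# An ECS "service" keeps a desired count of tasks running, much like a Deployment.
ecs.create_service(
    cluster="my-cluster",                       # placeholder cluster name
    serviceName="hello-world",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=1,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaaa1111"],     # placeholder subnet
            "securityGroups": ["sg-bbbb2222"],  # placeholder security group
            "assignPublicIp": "ENABLED",
        }
    },
)
```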
EKS and CodeBuild
CodeBuild, once I got over the initial configuration hurdles, works quite well for me. Most of my CI happens on GitHub Actions nowadays, but as we all know by now, I do run an EKS cluster. So, in my mind, I figured it would be far easier to get CodeBuild and EKS working together than EKS and GitHub Actions. Suffice to say, it took a whole lot more to get there. This could be a failing on my part, but I fully expected to be able to route traffic entirely internally between my VPCs. Both VPCs were peered, yet whenever I tried to connect to the cluster over private endpoints, things failed completely. Once I got past that hurdle, the next one was figuring out exactly how IAM works in EKS. Fortunately for me, I found an AWS blog post that described what I was looking to accomplish. It would have been nice if I could just attach an additional policy to my CodeBuild IAM role so that, when it authenticates with the cluster, the role is already recognised as a user. Instead, one needs to jump through a hoop or two to get things configured cluster-side first.
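For what it's worth, the cluster-side hoop is the aws-auth ConfigMap in the kube-system namespace: the CodeBuild role's ARN has to be mapped to a Kubernetes user or group before any policy attached to the role matters. The sketch below, using the official kubernetes Python client, shows roughly what adding that mapping looks like; the role ARN and group name are placeholders, and the group would still need an RBAC RoleBinding or ClusterRoleBinding granting whatever access the build actually needs.

```python
import yaml
from kubernetes import client, config

# Assumes the local kubeconfig is already authenticated against the EKS cluster.
config.load_kube_config()
v1 = client.CoreV1Api()

# aws-auth lives in kube-system and holds the IAM -> Kubernetes identity mappings.
cm = v1.read_namespaced_config_map("aws-auth", "kube-system")
map_roles = yaml.safe_load(cm.data.get("mapRoles", "[]")) or []

# Placeholder ARN and group; the group must be granted permissions via RBAC separately.
map_roles.append({
    "rolearn": "arn:aws:iam::123456789012:role/codebuild-deploy-role",
    "username": "codebuild",
    "groups": ["deployers"],
})

v1.patch_namespaced_config_map(
    "aws-auth",
    "kube-system",
    body={"data": {"mapRoles": yaml.safe_dump(map_roles)}},
)
```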
AWS Ingress Controller Woes
I argued in my final year project that Kubernetes is one of the great ways of staying agnostic to your cloud provider. You know, once you are using plain old Kubernetes, you can move to any other certified Kubernetes provider and get up and running again. I started to contradict myself when I began looking at the AWS Ingress Controller to replace Nginx. It was nothing against Nginx, but I just wanted to use an application-level load balancer rather than a network-level one. I wished to take advantage of my EC2 security groups and be able to attach one to the load balancer to better control traffic in and out of the cluster. As it turns out, though, as of writing the AWS Ingress Controller creates a load balancer per Ingress resource. So with Nginx, if I have three Ingress objects, they all map to the one load balancer. If I were to use the AWS controller, I would have three load balancers to pay for. Unless I wanted to centralise all Ingress traffic and mappings under a single Ingress resource, I would be greatly increasing my monthly cost. I can see the appeal of a load balancer per Ingress to an extent, but surely just make it an option one can enable via an annotation or something. Fortunately for me, I have an experiment running with Nginx Ingress and allowing only specific IP ranges, so we will see how that goes.
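To put a rough number on that, here is the back-of-the-envelope arithmetic I had in mind. The hourly rates are assumptions based on us-east-1 list pricing at the time of writing and ignore LCU and data-processing charges, so treat the figures as illustrative only.

```python
# Rough monthly cost of load balancer hours alone (assumed us-east-1 list
# prices, ignoring LCU / data processing charges).
HOURS_PER_MONTH = 730
ALB_HOURLY = 0.0225   # assumed USD per ALB-hour
NLB_HOURLY = 0.0225   # assumed USD per NLB-hour

ingresses = 3

# AWS Ingress Controller: one ALB created per Ingress resource.
alb_monthly = ingresses * ALB_HOURLY * HOURS_PER_MONTH    # roughly 49 USD a month

# Nginx Ingress: all three Ingress objects share the single load balancer in front.
nginx_monthly = 1 * NLB_HOURLY * HOURS_PER_MONTH          # roughly 16 USD a month

print(f"AWS Ingress Controller: ~${alb_monthly:.2f}/month")
print(f"Nginx behind one LB:    ~${nginx_monthly:.2f}/month")
```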
Overall, I probably still have a lot more learning to do to avoid some of these annoyances. But I think when it comes to some of our tool choices, we all have these little things that we want fixed. Maybe we just have to bring them up and hopefully find out it is not just us who have these issues!