Amazon Web Services recently announced their new product, Elastic Kubernetes Service (EKS), which unfortunately comes with some extra costs. I, along with thousands of others, waited at 11am PST to hear about the new container service from Amazon. For those who follow Hacker News, someone supplied links to the actual dashboard, which included the pricing. EKS starts at a flat rate of $0.20/hr, which is cheap if you consider it's an autoscaling set of Kubernetes masters and etcd, but when every other cloud platform offers this for free, I started to question whether I could really wait on AWS to bring the price down.
At roughly $150/mo, EKS has priced me out of AWS for Kubernetes, and my goal is to have all of my services on Kubernetes by the end of 2018. With initial testing on Google Cloud I've managed to define an infrastructure using Traefik as an external load balancer. Surprisingly, there are almost no guides or documentation on using Traefik this way. The decision to use Traefik came down to cost: a Google Cloud load balancer is $18/mo with a single rule, and since I have multiple websites this could either turn into one load balancer per app (which is how Kubernetes works out of the box) or a more expensive load balancer in front of a non-scaling cluster. So I thought, why not get the best of both worlds and build with the intention of having a load balancer, but have it be undersized? Enter the f1-micro instance at $4.80/mo; it can probably handle some amount of traffic before dying. I haven't run any bandwidth tests yet.
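The external load balancer idea boils down to skipping the cloud load balancer entirely and letting Traefik bind directly to the edge node's public IP. A minimal sketch of what that could look like, assuming Traefik 1.x running as a DaemonSet pinned to the f1-micro via a hypothetical role=edge label (all names here are illustrative, not my actual manifests):

```yaml
# Hypothetical manifest: run Traefik directly on the f1-micro "edge" node,
# binding ports 80/443 on the host so no cloud load balancer is needed.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traefik
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      hostNetwork: true        # listen on the node's public IP directly
      nodeSelector:
        role: edge             # illustrative label applied to the f1-micro
      containers:
      - name: traefik
        image: traefik:1.7
        args:
        - --entrypoints=Name:http Address::80
        - --entrypoints=Name:https Address::443
        - --kubernetes         # route based on Ingress resources
```

With this in place, DNS for each site just points at the f1-micro's IP, and Traefik routes to the right backend based on the Ingress rules.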
The infrastructure goals are to have a single production-grade (albeit degraded) Kubernetes cluster and segment environments via namespaces. Until I reach sufficient scale, this is a cost-effective way to have a development/staging and production setup. Another benefit of Traefik is native support for Let's Encrypt, which lets me terminate SSL one hop closer to the instances running the applications; a huge benefit for security. Deployments also become much easier to handle, as they're as simple as building the container, pushing it to the repository, and updating the k8s deployment. This has proven trivial compared to my current lengthy setup: a deployment can be done within minutes instead of an hour, and it has the added benefit of rolling updates.
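The build-push-update flow above maps onto a standard Deployment with a rolling-update strategy, where bumping the image tag is the entire release. A sketch under assumed names (app, project, and ports are placeholders, not my real services):

```yaml
# Hypothetical deployment for one site; environments are segmented by namespace.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysite
  namespace: production
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0    # keep the site serving during a deploy
      maxSurge: 1          # bring up the new pod before killing the old one
  selector:
    matchLabels:
      app: mysite
  template:
    metadata:
      labels:
        app: mysite
    spec:
      containers:
      - name: mysite
        image: gcr.io/my-project/mysite:v2   # bump this tag to deploy
        ports:
        - containerPort: 8080
```

After pushing a new image, `kubectl set image deployment/mysite mysite=gcr.io/my-project/mysite:v3 -n production` rolls it out with zero downtime.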
Q2 2019 Update
The migration was completed late in 2018 and Cryecss.net is running on Digital Ocean. Moardammit and all new projects will run within GCP on Kubernetes. The videos for Moardammit are still stored on AWS, as the plan for migrating the swf archive before Flash dies in 2020 leaves much to be desired.