Jay Taylor's notes

back to listing index

Google Kubernetes Engine is introducing a cluster management fee on June 6 | Hacker News

Original source (news.ycombinator.com)
Tags: Google kubernetes gke frontline-googlers news.ycombinator.com
Clipped on: 2020-03-04

Just received an email from Google Cloud -

"On June 6, 2020, your Google Kubernetes Engine (GKE) clusters will start accruing a management fee, with an exemption for all Anthos GKE clusters and one free zonal cluster.

Starting June 6, 2020, your GKE clusters will accrue a management fee of $0.10 per cluster per hour, irrespective of cluster size and topology.

We’re making some changes to the way we offer Google Kubernetes Engine (GKE). Starting June 6, 2020, your GKE clusters will accrue a management fee of $0.10 per cluster per hour, irrespective of cluster size and topology. We’re also introducing a Service Level Agreement (SLA) that’s financially backed with a guaranteed availability of 99.95% for regional clusters and 99.5% for zonal clusters running a version of GKE available through the Stable release channel. Below, you’ll find additional details about the new SLA and information to help you reduce your costs."

This is awful - I don’t think GCP is fully aware of their position in the market as the second, inferior choice. I took a bet on the underdog by using GCP and they bit me back in return. Especially considering their ‘default’ Kubernetes config automatically sets you up with three(!) control planes in replication, that’s, as far as I understand, ~$300 added to our monthly bill, for nothing.

Oh, and per their docs, the three-control-planes decision is not reversible - I cannot in fact shut two of those down without shutting down my production cluster and starting a new one. https://cloud.google.com/kubernetes-engine/docs/concepts/typ...

Awful. Just so awful.

Edit: To answer some questions below - we have a single-tenant model where we run an instance of our async discussion tool (https://aether.app) per customer for better isolation, that’s why we had bought into Kubernetes / GCP. Since we have our own hypervisor inside the clusters, it makes me wonder whether we can just deploy multiple hypervisors into the same cluster, or remove the Kubernetes dependency and run this on a Docker runtime in a more classical environment.

Thank you for the feedback. The management fee is per cluster. You are not billed for replicated control planes. You can use the pricing calculator at https://cloud.google.com/products/calculator#tab=container to model pricing, but it should work out to $73/mo regardless of nodes or cluster size (again, because it's charged per-cluster).
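Seth's number is easy to sanity-check; a quick sketch, assuming an "average" month of 730 hours (8,760 hours in a year divided by 12):

```python
# Sanity check of the announced GKE management fee.
# Assumption: an "average" month of 8760 / 12 = 730 hours.
HOURLY_FEE = 0.10            # USD per cluster per hour, from the announcement
HOURS_PER_MONTH = 8760 / 12  # 730 hours

monthly_fee = HOURLY_FEE * HOURS_PER_MONTH
print(f"${monthly_fee:.0f}/mo per cluster")  # → $73/mo per cluster
```

Since the fee is charged per cluster, node count and topology don't change this number.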

There's also one completely free zonal cluster for hobbyist projects.

Seth — I appreciate you being here to take feedback, and for the clarification as well. The surprising email I received this morning is very hazy on the details, and the docs linked from the email are not updated yet.

The main issue is that not charging for the control plane and charging for the control plane lead to two very different Kubernetes architectures, and per your docs, those decisions made at the start are very much set in stone. You cannot change your cluster from a regional cluster to a single-zone cluster, for example. So you have customers who built their stacks around your free control plane, and you’re turning the screws by adding a cost for it — but they cannot change the type of their cluster to optimise their spend, since, per your docs, those decisions are set in stone. That’s entrapment.

You should keep existing clusters in the pricing model they’ve been built in, and apply this change for clusters created after today.

That said, many of us made a bet on GCP. For us in particular, we made a bet to the point that our SQL servers are on AWS, but we still switched to GCP for the ‘better’ Kubernetes and for the lack of nickel-and-diming, since AWS had a charge that looked like it was designed to convey that they’d much rather have you use their own stuff than Kubernetes. It is a relatively trivial amount, but it makes a world of difference in how it feels, and you guys know more than anyone how many of these GCP vs AWS decisions are made not on data sheets but on the ‘general feel’, for the lack of a better word.

AWS’ message is that they’re the staid, sort of old fashioned, but reliable business partner. GCP’s message, as of this morning, is stop using GCP.

Thank you <3. I apologize that the email was hazy on details. I can't un-send it, but I'll work with the product teams to make sure they are crystal clear in the future. I'm interested to learn more about what you mean by outdated docs; the documentation I'm seeing appears to have been updated. Can you drop me a screenshot, maybe on Twitter? (Same username, DMs are open.)

These changes won't take effect until June - customers won't start getting billed immediately. I'm sorry that you feel trapped, that's not our intention.

> You should keep existing clusters in the pricing model they’ve been built in, and apply this change for clusters created after today.

This is great feedback, but clusters should be treated like cattle, not pets. I'd love to learn more about why your clusters must be so static.

> This is great feedback, but clusters should be treated like cattle, not pets. I'd love to learn more about why your clusters must be so static.

What’s inside our clusters are indeed cattle, but the clusters themselves do carry a lot of config that is set via the GCP UI for trivial things like firewall rules. Of course we could script and automate it, but your CLI tool also changes fast enough that tracking it becomes an ongoing maintenance burden shifted from DevOps to engineers. In other words, it will likely incur downtime due to unforeseen small issues.

It’s also in you guys’ interest that we don’t do this and that clusters stay as static as possible right now, since if we are risking downtime and moving clusters anyway, we’re definitely moving that cluster back to AWS.

Hmm - have you considered a tool like Terraform or Deployment manager for creating the clusters? In general, it's best practice to capture that configuration as code.

We use Skaffold and it’s great. I’m talking about very minor unforeseen stuff that causes outages, not that we do it manually.

> clusters should be treated like cattle, not pets

Heh... how many teams actually treat their clusters like cattle, though? Every time I advocate automation around cluster management, people start complaining that "you don't have to do that anymore, we have Kubernetes!"

Some people get it, yes, but even of that group, few have the political will/strength to make sure that automation is set up on the cluster level—especially to a point where you could migrate running production workloads between clusters without a potentially large outage / maintenance window.

> I'm sorry that you feel trapped, that's not our intention.

Please don't do this. You can apologize for your actions and work to improve in the future, but you cannot apologize for how someone feels as a result of your actions.

Also, intent doesn't matter unless you plan to change your behavior to undo or mitigate the unintended result.

GKE can't offer financially backed SLOs without charging for the service. This is something that, I assume, significant customers want and that competitors already have:


Workers are not free and never were. So they were already charging.

Correct, but the control plane nodes _we're_ free and had no SLA. This changes that.

_were_ free. (Emphasis yours.)

I agree the rollout is a little bumpy, but I'm curious what workloads you are using k8s for where a $74/mo (or $300/mo) bill isn't a rounding error in your opex?

Think about any medium sized dev agency managing 3x environments for 20x customers. That's 50k/year out of the blue.
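Using the $73/mo per-cluster figure quoted upthread, the agency math checks out; a rough sketch (the 3-environments-by-20-customers split is the commenter's example):

```python
# Rough annual cost for a dev agency running one cluster per
# environment per customer (figures from the comment above).
MONTHLY_FEE = 73   # USD per cluster per month, per the announcement thread
environments = 3   # e.g. dev / staging / prod
customers = 20

clusters = environments * customers
annual_cost = clusters * MONTHLY_FEE * 12
print(f"{clusters} clusters -> ${annual_cost:,}/year")  # → 60 clusters -> $52,560/year
```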

My problem is that this fee doesn't look very "cloud" friendly. Sure the folks with big clusters won't even notice it, but others will sweat it.

The appeal of cloud is that costs increase as you go, and flat rates are typically there to add predictability (see BigQuery flat rate). This fee does the opposite.

$3600/year is significant for a startup on a shoestring budget.

If this cost bothers you a great deal, why not just deploy a new cluster?

> and the docs linked from the email are not updated yet.

That about sums up most things Google does for developers.

I thought the standard advice for Google stuff was "there are always two systems - the undocumented one, and the deprecated one"

Sorry for the technical tangent, but I'm curious: your decision-making on GCP appears to appeal to best-of-breed + cost, but you put SQL Server on AWS? If you are saying SQL Server is better on AWS than on Azure, it would be interesting to learn why.

We need MySQL 8 because of window functions, which GCP does not offer. That is available on AWS.

My bad. A clever marketing decision made me read the capital SQL as SQL Server, since I am used to people saying Postgres, MySQL or SQL.

Hi Seth,

What about clusters that are used for lumpy work loads? Like data science pipelines? For example, our org has a few dozen clusters being used like that.

Each pipeline gets its own cluster instance as a way to enforce rough-and-ready isolation. Most of the time the clusters sit unused. To keep them alive we keep a small, cheap, preemptible node running on the idle cluster. When a new batch of data comes in, we fire up kube jobs, which then trigger GKE autoscaling to process the workload.

This pricing change means we're looking at thousands of dollars more in billing per month, without any tangible improvement in service. (The keepalive-node hack only costs $5 a month per cluster.) We could consolidate the segmented cluster instances into a single cluster with separate namespaces, but that would also cost thousands in valuable developer time.

I don't know how common our use pattern is, but I think we would be a lot better served by a discounted management fee when the cluster is just being kept alive and not actually using any resources. At $0.01, maybe even $0.02, per hour we could justify it. But paying $0.10 to keep empty clusters alive is just egregious.
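A sketch of the gap being described, using the $5/mo keepalive figure from the comment and taking "a few dozen" as 36 clusters (an assumed count):

```python
# Monthly cost of keeping mostly-idle pipeline clusters alive, before the
# fee, after it, and under the discounted rate proposed above.
# Assumptions: $5/mo keepalive node (from the comment), a 730-hour month,
# and 36 clusters as a stand-in for "a few dozen".
HOURS_PER_MONTH = 730
KEEPALIVE_NODE = 5.00
ANNOUNCED_FEE = 0.10 * HOURS_PER_MONTH  # $73/mo at the announced rate
PROPOSED_FEE = 0.01 * HOURS_PER_MONTH   # $7.30/mo at the suggested $0.01/hr
clusters = 36

print(f"today:     ${KEEPALIVE_NODE * clusters:,.0f}/mo")
print(f"announced: ${(KEEPALIVE_NODE + ANNOUNCED_FEE) * clusters:,.0f}/mo")
print(f"proposed:  ${(KEEPALIVE_NODE + PROPOSED_FEE) * clusters:,.0f}/mo")
```

At these assumed numbers, the bill goes from roughly $180/mo to roughly $2,800/mo for idle capacity.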

Those empty clusters that you get for free cost Google money. Perhaps it never should have been free, because that skewed incentives towards models like this.

We currently spin up dev clusters with a single node. $73/mo is going to basically double the cost of all of these.

Genuinely curious: isn’t Docker Kubernetes an option?

This highlights a sorta-weird consequence of this pricing change: suddenly pricing incentivizes you to use namespacing instead of clusters for separating environments.

(As a security person: ugh.)

That’s interesting - I think you’re right. We might move our staging cluster into our main production deployment.

More likely though, AWS or OpenShift running on bare metal on a beefy ATX tower in the office. We want to have production and staging as close to each other as possible, so this is an additional reason, and a p0 flag, for reducing the dependency on Google-specific bits of Kubernetes as much as possible, which is hopefully useful for our exit strategy as well.

Kubespray works well for me for setting up a bare bones kubernetes cluster for the lab.

I'll use helm to install MetalLB for the load balancer, which you can then tie into whatever ingress controller you like to use.

For persistent storage, a simple NFS server is the bees knees. It works very well, especially over 10GbE, and an NFS provisioner is a helm install away. Do NOT dismiss NFSv4; it's actually very nice for this sort of thing. I just use a small separate Linux box with software RAID on it for that.

If you want to have the cluster self-host storage or need high availability then GlusterFS works great, but it's more overhead to manage.

Then you just use normal helm install routines to install and setup logging, dashboards, and all that.

OpenShift is going to be a lot better for people who want to do multi-tenant stuff in a corporate enterprise environment, where you have different teams of people, each with their own realm of responsibility. OpenShift's UI and general approach are pretty good about allowing groups to self-manage without impacting one another. The additional security is a double-edged sword: fantastic if you need it, but an annoying barrier to entry for users if you don't.

As far as AWS goes... EKS recently lowered their cost from 20 cents per hour to 10 cents. So costs for the cluster are on par with what Google is charging.

Azure doesn't charge for cluster management (yet), IIRC.

(replying to freedomben): NFS has worked fairly well for persistent file storage that doesn't require high performance for reads/writes (e.g. good for media storage for a website with a CDN fronting a lot of traffic, good for some kinds of other data storage). It would be a terrible solution for things like database storage or other high-performance needs (clustering and separate PVs with high IOPS storage would be better here).

It's good to have multiple options if you want to host databases in the cluster.

For example you could use NFS for 90% of the storage needs for logging and sharing files between pods. Then use local storage, FCOE, or iSCSI-backed PVs for databases.

If you are doing bare hardware and your latency requirements are not too stringent, then not hosting databases in the cluster is also a good approach. Just use dedicated systems.

If you can get state out of the cluster then that makes things easier.

All of this depends on a huge number of other factors, of course.

> Have you used NFS for persistent storage in prod much?

I think NFS is heavily underrated. It's a good match for things like hosting VM images on a cluster and for Kubernetes.

In the past I really wanted to use things like iSCSI for hosting VM images and such, but I've found that NFS is actually a lot faster for a lot of things. There are complications to NFS, of course, but they haven't caused me problems.

I would be happy to use it in production, and have recommended it, but it's not unconditional. It depends on a number of different factors.

The only problem with NFS is how you manage the actual NFS infrastructure. How much experience does your org have with NFS? Do you already have an existing file storage solution in production you can expand and use with Kubernetes?

Like if your organization already has a lot of servers running ZFS, then that is a nice thing to leverage for NFS persistent storage. Since you already have expertise in-house it would be a mistake not to take advantage of it. I wouldn't recommend this approach for people not already doing it, though.

If you can afford some sort of enterprise-grade storage appliance that takes care of dedupe, checksums, failovers, and all that happy stuff, then that's great. Use that and it'll solve your problems, especially if there is some sort of NFS provisioner that Kubernetes supports.

The only place where I would say it's a 'hard no' is if you have some sort of high scalability requirements, like if you wanted to start a web hosting company or needed to have hundreds of nodes in a cluster. In that case a distributed file system is what you need: self-hosted storage, aka "Hyper-Converged Infrastructure". The cost and overhead of managing these things is then relatively small compared to the size of the cluster and what you are trying to do.

It's scary to me to have a cluster self-host storage, because storage can use a huge amount of RAM and CPU at the worst times. You can go from a happy low-resource cluster to a node failing or another component falling over, and then, while everything is recovering and checksumming (and lord knows what), resource usage goes through the roof right at a critical time. The 'perfect storm' scenario.

Have you used NFS for persistent storage in prod much? I know people do it, but numerous solutions architects have cautioned against it.

What? Shouldn't you try to make the creation and deletion of your staging cluster cheap instead of moving it to somewhere else?

And if that is your central infrastructure, shouldn't it be worth the money?

I do get the issue with having cheap and beefy hardware somewhere else; I do that as well, but only for private use. My hourly salary spent (or wasted) on stuff like that costs the company more than just paying for an additional cluster with the same settings but perhaps far fewer nodes.

If more than one person is using it, the multiplied cost of suddenly unproductive people is much higher. That also decreases the per-head cost.

I suspect I'm in the minority on this, but I would love for k8s to have hierarchical namespaces. As much as they add complexity, there are a lot of cases where they're just reifying complexity that's already there, like when deployments are namespaced by environment (e.g. "dev-{service}", "prod-{service}", etc.) and so the hierarchy is already present but flattened into an inaccessible string representation. There are other solutions to this, but they all seem to extract their cost in terms of more manual fleet management.
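The "flattened hierarchy" point can be seen in a tiny sketch (the namespace names here are hypothetical, not from the thread):

```python
# Environment-per-namespace naming encodes a hierarchy inside a flat
# string; recovering it means parsing by convention. Names are hypothetical.
namespaces = ["dev-payments", "dev-search", "prod-payments", "prod-search"]

tree = {}
for ns in namespaces:
    env, _, service = ns.partition("-")  # convention: "<env>-<service>"
    tree.setdefault(env, []).append(service)

print(tree)  # → {'dev': ['payments', 'search'], 'prod': ['payments', 'search']}
```

The env/service tree is real, but it only exists by string convention; nothing in the API enforces or exposes it.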

Hey - I'm a member of the multitenancy working group (wg-multitenancy). We're working on a project called the Hierarchical Namespace Controller (aka HNC - read about it at http://bit.ly/38YYhE0). This tries to add some hierarchical behaviour to K8s without actually modifying k/k, which means we're still forced to have unique names for all namespaces in a cluster - e.g., you still need dev-service and prod-service. But it does add a consistent way to talk about hierarchy, some nice integrations and builtin behaviours.

Do you want to mention anything more about what you're hoping to get out of hierarchy? Is it just a management tool, is it for access control, metering/observability, etc...?

Thanks, A

You can dedicate nodes by namespace, at which point the isolation is pretty strong.

* Assuming you also configure strong RBAC, network isolation and don't let persistent volumes cross-talk

As also a security person (:wave:), you can use dedicated node pools and workload identity to isolate workloads in the same cluster.

Workload identity is a GCP-specific beta feature for mapping to GCP IAM, right?


Assuming you can do that, and your system is not using namespacing for its own purposes.

It's still billed by the minute. If you run your dev clusters 24x7, then they apparently are critical enough.

For a dev environment, why not host your own hardware? Especially if cost is a concern, it seems like a no brainer.

> There's also one completely free zonal cluster for hobbyist projects.


The 'cluster control plane is free' selling point was basically the _only_ thing I saw from all the different groups I worked with that was in GKE's favor. Yes, you can get one free cluster, but anyone serious about using Kubernetes would have _at least_ two clusters (a prod and a non-prod staging cluster), so unless you're a true hobbyist (and the use case for K8s in that realm is pretty slim unless it's to backstop work projects), this effectively means you're going to pay as much for GKE cluster control planes as you do for EKS.

Can you help me understand how these changes would be _more_ than EKS?

Sorry, misread the original post, it would be the same.

Too many people drank the cloud kool-aid. The move from day one was to create provider-agnostic cloud architectures and resist the use of provider-specific services.

That said they do make it damn hard. Our k8s cluster is as basic as it comes, no databases, simple deployments, but we do still have a dependency on Google Cloud Loadbalancer (which we hate).

If pricing goes up too much from this we'll move, but the GCL dependency will be a PITA :/

We’re in the same situation — we’ve engineered for minimum provider-specific dependencies but GKE LoadBalancers were where they got us via arm twisting as well. There is no way to expose a cluster to the outside world in a production environment otherwise.

It's kind of ridiculous internal load balancers can't get automatic certs. We've had to do a stupid dance just to get certs via the LE DNS challenge out of band, and then regularly install them on internal LBs.

We paid for long-lasting wildcard certs because of that. Which Apple killed a few days ago. It’s going to be fun when they are close to expiry.

Same! I still manually provision some certificates just because LEGO/etc. just don't work with GCP + Google Cloud Load balancer! And the docs for the entire subject are useless..

There are ways to expose your cluster to public and/or run your own load balancers on GKE (or any other cloud k8s deployment).

Do you also have occasional outages because the load balancer gets into a confused state and changes take 10+ minutes to propagate, with no recourse other than to destroy and re-create the entire resource?

How about managing your own k8s running on VMs / bare metal?

Pretty much anyone who has worked in ops for a while understood from the get-go that it's impossible to be totally provider-agnostic. K8s is just a nice API on top of the provider's API that still requires provider-specific configuration.

From what I've seen, managing k8s on your own often ends up requiring a dedicated team to keep up with their insane release cycle.

Can confirm. Depending on your cluster size you will need at least 2 dedicated people on the "Kubernetes" team. You'll probably also end up rolling your own deployment tools, because the K8s API is a little overwhelming for most devs.

The big problem with running your own cluster is the extra machines you need for a high-availability control plane, which is expensive. That is why Amazon and now Google feel like they can charge for this; you can't really do it any better yourself.

Disclaimer: I work for Red Hat and am very biased, but this is my own honest opinion.

If you're going to run on bare-metal or in your own VMs, OpenShift is very much worth a look. There are hundreds, maybe thousands of ways to shoot yourself in the foot, and OpenShift puts up guard rails for you (which you can bypass if you want to). OpenShift 4 runs on top of RHCOS which makes node management much simpler, and allows you to scale nodes quickly and easily. Works on bare metal or in the cloud (or both, but make sure you have super low latency between data centers if you are going to do that). It's also pretty valuable to be able to call Red Hat support if something goes wrong. (I still shake my head over the number of days I spent debugging arcane networking issues on EKS before moving to OpenShift, which would have paid for a year or more of support just by itself).

Doing load balancing, DNS, and egress has been way uglier in Google Cloud k8s than I expected. It pushes projects towards doing it themselves in-cluster, IMO.

GCP is actually more like the third, inferior option, behind Azure. Gartner lists Azure as just behind AWS for IaaS providers, and GCP a more distant 3rd:


Azure AKS is pretty terrible TBH in comparison to GKE.

Also, the lack of an SLA / a shady SLA does not help.

PS: Talking as someone with hands-on experience.

PS2: Azure support is terrible and their response times are constantly breaking SLA.

Counterpoint to your PS2 - I've used Azure support at both my enterprise job and my microISV - each time the responses have been quite quick, and each time they have been helpful.

Honestly, I've been pleasantly surprised.

GCP is not GKE. In my opinion, the GKE offering from GCP is the best right now.

Anecdotal, I know, but for prospective Latacora customers this is absolutely not reflected in market share. It's AWS first, GCP second, Azure very distant third. I'd happily believe Azure is dominating in some segments where MS showers prospective customers with millions of dollars in credit, but IMO a blind person can see Azure does not have the product offering to warrant a "completeness of vision" that is right on the heels of AWS.

According to Canalys, GCP is at 6% cloud market share (in dollars), Azure at 17.4% and AWS at a bit over 32%.


AliCloud and Rackspace are very close to GCP as well.

That being said, if you're planning on running Kubernetes, I'd choose GCP over any other offering - the tooling and support just seems better, in my entirely subjective opinion.

What essential product offerings is Azure missing that AWS have?

My experience is that once AWS offers a new service that gets attention, a few months later Azure also offers it, and vice versa.

Anecdotally and in my opinion, Azure is more complete than GCP. Between stuff like this and their product-dropping stigma, most of my customers (in the cloud consulting space) are trying to get into Azure. This is across every industry we work in (retail especially). I've come across 2 customers in 3 years of consulting that want anything to do with GCP.

> Azure is more complete than GCP

It has more features, yes. How well those features work is another matter entirely.

Interesting. As a mid-level GCP customer, it won't make a big dent in our bill specifically, but in the end, I'm not sure this pricing move is a smart strategy.

With this fixed-fee model, the change will barely make a difference (== Google revenue) for the large customers who can spare the money, but will create a significant entry barrier for the side project / super-early-stage startup that is considering getting hooked on GCP, specifically GKE.

Then again, not my decision to make.

That's, for me, the most frustrating thing with GCP, AWS and Azure. I would never use them as a very early three-person startup or for private purposes.

There is no billing protection (which could make you very poor very fast), and every service has a certain cost and quality bar which is just not feasible in the beginning.

Even GKE with its free Kubernetes master blocks a lot of resources on the nodes: https://cloud.google.com/kubernetes-engine/docs/concepts/clu...

There are also a ton of great features on GKE you will probably never use if you are too small. It is so much cheaper to just get cheap hardware somewhere and put your own k8s onto it, if you have more time than money.

Even on Digital Ocean you have the load balancer problem: you need to use the provided and also 'costly' LoadBalancer service. The only hacky way to avoid it is exposing your ingress on the host and mapping that one IP, but then you lose all the self-healing and load-balancing capability.

My approach with any Google B2B product: always have a plan to migrate out of Google, and never agree to anything that locks you to Google.

After seeing what they did to Google Maps, and API.AI / Dialogflow jumping from free to $5k overnight, I just can't trust them.

There is a whole generation of future CTO / VP of Engineering types who are coming up on these reputations for GCP, AWS, Azure, etc., and it'll be interesting to see how the biases play out over the next 5-10 years. I predict a strong move back to self-hosting once the pains of, e.g., self-managing a bare-metal K8s cluster come down, and as storage/RAM/CPU prices continue to drop. I for one welcome it.

There is a billion dollar company on the horizon for whoever can best commoditize bare metal with an apple-esque usability model.

> never agree to anything that locks you to Google.

How is Google different from other cloud providers in terms of vendor lock-in?

Fair point - yes, I kind of try to avoid it for any provider. But the special thing about Google is that they raise prices or, worse, cancel or modify services at will.

For AWS or Azure I developed way more trust over time - could be subjective - but it also could be that there is a reason Google is a distant 3rd in the game.

Being locked into a provider that does not increase prices or modify services in a non-compatible way (cancel, etc.) works much better than being locked into a provider that does.

For comparison, running an HA master node in London on n1-standard-1 will set you back ~93 dollars per month. On top of that, obviously, you'd be figuring out how Kubernetes works, what the best configuration is, among other things. I don't agree with the blatant bait-and-switch, but it still works out way better.

Dumb plan: underdogs don't have billion-dollar connections. A worse plan because Google has a reputation for bad service.

For what it's worth, we're seeing GKE being the main reason prospective clients use GCP at Latacora, to the point where I'd be surprised if someone was on GCP but _not_ using GKE. Obviously that's a small subset of all companies, but GKE does seem like the goose that lays the golden eggs for them, at least insofar as they care about startup market share.

I also think of them as the second, inferior cloud, but they're almost certainly the better k8s hoster. If you're serious about running k8s on AWS, there's a good chance you're doing something like CloudPosse's Terraform-based setup, not EKS.

> their position in the market as the second, inferior choice

Who's the superior choice? EKS?

It's $73, not $300, and the first cluster is free.

I'm not sure what your use case is that you would choose GKE and yet worry about $300 per month in infra costs.

For corp we use GKE. For private use I self-host k3s, and for our startup we use a super cheap Digital Ocean cluster.

> worried about 300$ per month infra costs

That feels like the wrong attitude.

The salary of the people working with and using these 'tools', this infrastructure, is higher than $300.

If your Kubernetes cluster is part of your core infrastructure, then $300 more or less should not be an issue at all (not to say that I think $300 is nothing).

That should not mean you should waste money, but often enough, if you buy cheap, your hardware breaks, and your time & materials cost much more than what better hardware would have cost, then you wasted money by buying cheap.

Unfortunately, with IT products there are certain things which are not directly visible, like how secure your product is. GCP offers 2FA; Digital Ocean does not. How much money is it worth to you to have your whole infrastructure protected by 2FA? For me, in a business context, no 2FA would be a no-go.

It's considerably cheaper than EKS. Looks like $75-80 a month vs I believe around $200 per EKS cluster.

Everyone's having EKS cost problems while I'm just sitting over here paying nothing for ECS control planes.

I am not sure how you get to $200/month for an EKS cluster - it is $0.10/hr


It's the same price as EKS... https://aws.amazon.com/eks/pricing/

I know the HN crowd hates this sort of thing, but it seems reasonable to me. $70/month is cheap for a business, and most users likely have well under 10 clusters, probably 2-3. This is probably mostly to cut down on edge-case users who are spinning up crazy numbers of clusters for weird reasons and costing Google a bunch of $$$.

> Last modified: November 27, 2018 | Previous Versions

> As of November 28, 2017, Google Kubernetes Engine no longer charges a flat fee per hour per cluster for cluster management, regardless of cluster size, as provided at https://cloud.google.com/kubernetes-engine/pricing. Accordingly, Google no longer offers a financially-backed service level agreement for the Google Kubernetes Engine service. The service availability of nodes in a Google Kubernetes Engine-managed cluster is covered by the Google Compute Engine SLA at https://cloud.google.com/compute/sla.

> Uptime for Google Kubernetes Engine is nevertheless highly important to Google, and Google has an internal goal to keep the monthly uptime percentage at 99.5% for the Kubernetes API server for zonal clusters and 99.95% for regional clusters regardless of the applicability of a financially-backed service level agreement.


Interesting that they're now walking back from this...

Google, walking back their commitment to software? gasp

Time to dump GCP then. It's not even that the fee is that large, but rather that this is once again Google failing on a long term commitment and shafting those on their platform once again. This was one of the benefits that was pushed by their sales team when they called us up to market GCP over AWS and their EKS offering. Doesn't matter that they are price matching, Google's inability to actually commit to long term support, servicing, pricing or features across any of their products is tiresome. Time to move business elsewhere to AWS or Azure. They may be more expensive, but at least we know what we are paying for, and that it's going to stay that way for a significant length of time.

If the main value of GKE over DIY is $73, you should totally DIY.

I mostly try not to be too Google-focused here, but I have to say...

I'm pretty proud of GKE, and I think it offers a lot of value other than just being cheap. Managing clusters is not always easy. GKE handles all of that for you - including integrations, qualifications, upgrades, and patching clusters transparently BEFORE public security disclosures happen.

We have a large team of people who deal with making GKE the industry-leading Kubernetes experience that it is. They are on-call and active in every stage of the GKE product lifecycle, adding value that you maybe can't see every day, but I promise you is there. When things go sideways, there isn't a better team on the planet to field the tickets.

I don't understand the anger here - you're literally saying you'd rather pay more for a service of lower quality because... why? Because they will continue to charge you more? Does not compute.

For those people who use a large number of small clusters, I understand this may make you reconsider how you operate. As a Kubernetes maintainer, I WANT to say that a smaller number of larger clusters is generally a better answer. I know it's not always true, but I want to help make it true. GKE goes beyond pure k8s here, too. Things like NodePools and sandboxes give you even more robust control.

GKE is the best managed Kubernetes you can get. And we're always making it better. Those clusters actually DO have overhead for Google, and as we make GKE better, that overhead tends to go up. As someone NOT involved in this decision, it seems reasonable to me that things which are genuinely valuable have a price.

Also, keep in mind that a single (zonal) cluster is free, which covers a notable fraction of people using GKE.

I believe everything that you say. The value it provides is very good.

If Google Cloud had charged $73 from the start (or after beta), I think there wouldn't be so much anger.

The anger comes from the fact that a product was free and now it is not. A lot of people made architectural choices that depended on a price of 0. (You mentioned these cases in your post.)

However, I believe the bigger issue is that Google Cloud essentially broke a promise.

As a customer I need to be able to trust my cloud provider, because I am literally helpless without it.

Can I trust an entity that breaks promises? No, I can't. I need to worry, especially if I cannot follow the reasoning behind it.

If it is true that Google's overhead went up because of improvements, then it would have been better to offer two kinds of clusters (better and paid, old-school and free). You would not have broken the promise, and people could choose at their own pace to upgrade if they need to.

Also keep in mind that you also carry the Google brand. Hence, if other teams at Google break promises (e.g. Stadia), this will also reflect on the Google Cloud team. Unless you keep a crystal-clear track record, I need to assume it can get worse than what you have done right now.

My conclusion is that, I will design the cloud architecture I am responsible for, such that it has minimal dependencies on Google Cloud specifics.

Just a data point:

I'm the CTO at a very small company. All our stuff is running on GKE. Our monthly bill tends to be less than $10,000/mo. We're currently in the process of splitting our stack into separate projects and clusters, because co-locating projects in a single cluster has gotten messy. We'll probably end up with 4-5. That will increase our bill by $292/mo, worst case, assuming the first cluster is free. For a company our size, it's not a huge expense. But these things add up.

Since moving from DigitalOcean, our Google Cloud setup has more than doubled our monthly bill. We're paying for more compute, but certainly not twice the amount, as we've only gone from 14-15 nodes to around 20; it's just more expensive across the board, both node cost and ingress/egress. We're even cost-cutting by using self-hosted services instead of Google's; for example, we use Postgres instead of Cloud SQL. I ran the numbers earlier today; the equivalent on Cloud SQL would be 3.4 times more expensive.

In short, Google Cloud is expensive, and it's not like the bill is getting smaller over time.

Developments like these factor in my choice of cloud provider for future projects.

> I don't understand the anger here - you're literally saying you'd rather pay more for a service of lower quality because... why? Because they will continue to charge you more? Does not compute.

This response, right here, is everything you need to understand about why Google Cloud is failing to sell to the enterprise market.

The enterprise market only really cares about one thing: rock solid stability. It doesn't care about features, and it doesn't (really) care about price. It wants a product that it can forget is there.

What's really sad is, technically, GKE is that product. It just works. It is solid. You do get to forget that it's there. Until you get a random email telling you that you get to explain to your boss that your bill is going up next month and your project might end up running over budget as a result.

If you can understand why a large segment of the market prefers to pay a higher but stable charge over a lower but undependable charge, then you can understand why Google Cloud is failing at selling to enterprise.

>If the main value of GKE over DIY is $73, you should totally DIY.

It's not the fee itself, it's the worry that GKE will do what Google Maps did and massively increase fees with very little notice, causing people to scramble to migrate.

Google has a really bad reputation right now when it comes to cancelling projects that people have built their businesses upon, or jacking up fees quickly. The $73 is irrelevant on its own - the issue is (a lack of) customer trust.

Google and Google Cloud are largely different businesses, though I understand it's hard to keep that in mind in the context of things like this.

I encourage everyone to always stay nimble and keep your eyes on portability. I also encourage you to try to assess the REAL costs of doing things yourself. It's rarely as cheap as you think it is.

As a Kubernetes maintainer, I am fanatical about portability.

As part of the GKE team, I think we provide tremendous value that people tend to under-estimate.

NOTE: I was NOT involved in this decision, but I understand it, and I want to help other people understand it.

Sorry that you have to work with the low-IQ people who made this decision. I've followed your work on k8s; thank you, it's better because of people like you and others.

I don't think that is at all a fair characterization, you just don't have the same data available to you.

Thanks for the props. It means a lot to me personally.

> As a Kubernetes maintainer, I WANT to say that a smaller number of larger clusters is generally a better answer.

I'm not trying to nitpick here, but that justification is awful. It goes against reliability engineering at a deeply fundamental level and pretty much guarantees that already-not-that-reliable things become even less reliable. Generally, the more isolated entities you have and the smaller they are, the less they affect each other and the environment when something bad happens, the faster they can be recovered, the fewer end users they affect, etc. If I remember correctly, this is even how some Kubernetes people justified the ideas behind Kubernetes itself, which you now want to drop.

That is not absolute truth. If it were you would eschew kubernetes altogether and just use VMs.

Everything is a tradeoff. If you want total isolation, you pay for it. If you don't want to pay for it, you make more value-based tradeoffs.

Concretely, Google runs "a handful" of "pretty reliable" services on a relatively small number of clusters.

What? A $73 price difference from AWS was their main selling point?

It depends entirely on how a firm has its infrastructure set up. If you have small clusters per client for isolation/compliance purposes, then with, for example, 250 clients, each using, say, 1,000 billable hours/month, you end up with:

250 * 0.1 * 1000 = $25,000/month.

This is quite the price hike for something that a) was free until now, and b) is not a service that warrants such a fee, given that it uses existing (GCE) resources and can frankly be done manually by one of the DevOps engineers in a few hours/month with some scripting. It seems to be just a charge for convenience.
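The arithmetic above can be sketched in a few lines. This is a hypothetical illustration only: the $0.10/cluster/hour rate comes from the announcement, while the client count and hours are the commenter's example numbers.

```python
# Hypothetical fleet-cost sketch for the announced GKE management fee.
RATE_PER_CLUSTER_HOUR = 0.10  # $ per cluster per hour (from the announcement)

def monthly_management_fee(clusters: int, hours_per_month: int = 730) -> float:
    """Total monthly management fee for a fleet of clusters."""
    return clusters * RATE_PER_CLUSTER_HOUR * hours_per_month

# One small cluster per client, 250 clients, ~1000 billable hours/month each:
print(monthly_management_fee(250, 1000))  # 25000.0
# A single always-on cluster (~730 hours/month):
print(monthly_management_fee(1))          # 73.0
```

This is also where the "$73/mo per cluster" figure quoted throughout the thread comes from: $0.10/hour times roughly 730 hours in a month.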

You keep noting how "easy" it is to provision and manage a Kubernetes cluster. From experience, properly securing and maintaining a Kubernetes cluster is a multi-person full-time job.

We already do provision clusters, using the aforementioned tools. There is some setup involved, but once done, provisioning and upgrade is relatively simple. Indeed, we used to exclusively provision and upgrade via Terraform/Ansible. When we started using GCP, any data that could be stored by a US company without causing compliance issues was offloaded to GCP over other providers due to the auto provisioning/management at no cost.

If you guys find it hard to maintain/upgrade clusters, that's your business. All I am saying is that as a company giving you business, with this change, you are now no longer the cheapest, most reliable or most convenient. As a result, we will be moving to provision instances with other providers from now on.

Thank you for the feedback.

> Google's inability to actually commit to long term support...

This is _exactly_ what Google is doing in this case. We are providing an SLA - a legal agreement of availability and support. These changes introduce a guaranteed availability of the management control plane.

He means that there was a sales pitch from all the GCP sales guys that they would not charge for this. 99.95% is not enough, IMO, to charge $73/mo.

As someone else noted, it breaks a lot of recommended architectures where you would have auto provisioning and a lot of clusters to separate concerns and keep costs down.

Finally, the pricing changes are starting to look like a pattern, every time Google deems the usage of a product is good enough, they will increase the price.

They are the Ryanair of the cloud.

Edit 1: moreover, it will increase the cost of Composer, and on top of that, the recommended pattern pairs Composer with a Kubernetes cluster for executing the workloads.

> They are the Ryanair of the cloud.

Isn't Ryanair literally the Ryanair of the cloud(s)?

EKS only gives you 99.9% uptime, and I'm uncertain as to whether you could achieve more than 99.9% uptime on your own by DIYing your cluster in a public cloud provider without doing multi-region.

Shouldn't that be opt-in? The management control plane is not something we consider critical to operations. I'd happily accept if it was unavailable for 1 and a half minutes a day versus these additional costs.

That's great feedback. I'll relay that to the product team. IANAL, but I think it would be legally challenging.

IANAL either, but I don't see why it would be. Just have a separate cluster type, e.g. SLA Zonal, SLA Regional. The SLA already differentiates the current cluster types. Anthos clusters are also not subject to any additional fees?

And having it opt-in will save face with those users of GKE where an additional $73/m is significant.

Opt-in for the SLA and the additional cluster cost would be fantastic. We run pretty small clusters and don't need any additional SLAs on top of what's already provided. Frankly, we couldn't care less about the control plane SLA.

Hard to understand how it would be legally challenging. ISPs do it all the time when differentiating their business plans from residential. Both services run over the same infrastructure and you typically get the same/similar speeds, but a key difference is the SLA that comes with the business plan.

Sure, in this case I can see that. I was referring to those four points with respect to Google services in general. I'm sure I don't need to dig up a list of features and services that have been merged, shuttered, price hiked or moved into a different product suite over the years. Admittedly a lot of the issues are with the GSuite side of things, but it's sad to see this coming to GCP as well.

On a hopefully more constructive note, if this is the way it's going to be from now on, I would at least expect to see an exemption from such a management fee/SLA for preemptible nodes. Having an SLA and management fee on a cluster whose nodes can be killed within a 30-second window without prior warning seems little more than pointless.

Even if your worker nodes are pre-emptible, the master nodes are not. The management fee covers running those master nodes and many other core GCP integrations (like Stackdriver logging and other advanced functionality). Billing is computed on a per-second basis for each cluster. The total amount will be rounded to the nearest penny at the end of the month.
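The per-second accrual with end-of-month rounding described here might be sketched as follows. This is an illustration of the stated billing rule only, not Google's actual billing code.

```python
# Per-second accrual of the $0.10/cluster/hour management fee,
# rounded to the nearest penny at the end of the month (as described above).
HOURLY_RATE = 0.10  # $ per cluster per hour

def monthly_cluster_fee(seconds_running: int) -> float:
    """Fee for one cluster, accrued per second, rounded to the nearest cent."""
    raw = seconds_running * HOURLY_RATE / 3600
    return round(raw, 2)

print(monthly_cluster_fee(30 * 24 * 3600))  # 72.0 (a full 30-day month)
```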

> We are providing an SLA - a legal agreement of availability and support.

Do I still have to pay the bill first, fill out forms, get account managers involved, at some point receive a partial credit, and repeat this until the delta between what I was expected as the SLA credit and what I got as the SLA credit is less than the cost of the time to fight for another cycle?

Never trust free without an escape hatch.

Oh wow, one of the biggest reasons we picked Google Cloud was that you did not have to pay a flat fee for their managed Kubernetes service. Luckily there is Kubernetes support across all cloud providers, so we're happy we're not vendor locked in (the biggest reason we picked Kubernetes in the first place).

We were thinking of using Stackdriver for logging, but we were scared of vendor lock-in due to price increases or other changes that we've been warned about with Google. In this case, I think it's safe to say we'll be using Prometheus + Grafana + Loki instead, since a random Stackdriver flat fee or some other weird fee may be introduced and we may need to migrate out of Google.

The last place I worked, with a couple of petabytes of monthly Stackdriver logs and a full embrace of almost every GKE/GCP tool, also switched to Prometheus + Grafana due to a lack of functionality within Stackdriver. I think you're making a good choice.

We moved our cluster to the Hetzner Cloud for Emvi [1]. Yes, they don't offer a managed solution, and it took us about two weeks to set up and test the cluster properly. But the cost is less than 1/6 of what gcloud cost us. If you have the resources and knowledge to maintain your own cluster, check it out. They have insanely good pricing (€2.96/month for the cheapest instance, which is faster than Google's $15/month VM).

Here is a very good tutorial on how to set up your own cluster: https://community.hetzner.com/tutorials/install-kubernetes-c...

[1] https://emvi.com/

Hetzner is solid from a performance/price perspective. Mind that the network peering outside Europe is not that great, so it could be a very good choice depending on your user base's location.

True. In our experience it's fast enough. Additionally, traffic is free unless you exceed the 20 TB of outgoing traffic per node (which will probably never happen to us). In contrast, gcloud charges about 12 cents per GB of outgoing traffic (!!!).
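Using the figures from this exchange, the egress difference can be sketched. Assumed numbers: Hetzner's 20 TB included per node, and a flat $0.12/GB for gcloud; real GCP egress pricing is tiered by destination and volume, and the Hetzner overage rate below is a placeholder.

```python
# Rough egress-cost comparison using the (assumed) numbers above.
def gcp_egress_cost(gb: float, rate_per_gb: float = 0.12) -> float:
    """Flat-rate approximation of GCP internet egress cost."""
    return gb * rate_per_gb

def hetzner_egress_cost(gb: float, included_tb: float = 20.0,
                        overage_per_gb: float = 0.0) -> float:
    """Traffic under the per-node cap is free; the overage rate is a placeholder."""
    overage_gb = max(0.0, gb - included_tb * 1000)
    return overage_gb * overage_per_gb

# 5 TB of monthly egress:
print(gcp_egress_cost(5000))      # 600.0
print(hetzner_egress_cost(5000))  # 0.0
```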

After this, I've been exploring other places to host our team's clusters... copied pricing below

- EKS: $0.10/hour/cluster

- Digital Ocean: Free (only charges for the nodes)

- Azure: Free (only charges for the nodes)

In the long run we'll probably try to build our stack on vendor-agnostic tools:

- Rancher - https://rancher.com/products/rancher/

- Infra.app - https://infra.app (mentioned a few weeks back on the Kubernetes podcast)

- Prometheus https://prometheus.io/ - metrics

The cloud providers all include their own tooling (logging, monitoring) built in, but I'm worried this will only lock us into further price increases. Has anyone found a good vendor-neutral logging system? We don't really want to use the ELK stack right now since it's really heavy and costly to run...

One more shameless plug for my startup, https://kubesail.com (YCS19)!

Honestly, I understand the hard work it takes to manage all the clusters, but this was a total bait and switch and it hurts the reputation everyone has of Google Cloud. Telling us to DIY because we cannot pay $71 just sounds like something someone who works at Google would say, and you do work at Google.

The sentiment with my clients before was that Google Cloud was a great choice because of the security and expertise with GKE. It's also free!

Meanwhile, in the back of my head I've always had this fear because of your reputation that you do not keep your promises and that you do not care about your users. Because of this fear, we have tried to make every infrastructure decision not use a managed service by Google even though it may be easier to do so short-term.

For the product I'm working on, we decided to use Kubernetes just in case you baited and switched us with the reputation you have. In terms of monitoring, we really wanted to use Stackdriver, but now we're 100% using fluent-bit + prometheus + loki + grafana. It's the only way to protect ourselves from your reputation which is becoming a reality.

So yeah, this is pretty sad and a bad decision. Should have priced GKE at $70 / month to begin with and we would have been fine with it. Now we're (actually) looking at EKS since Amazon doesn't seem to have this reputation and you've spooked us. We never would have thought about using any other provider until today.

I understand the emotional response here, but I don't think it's rational. GKE has to work as a business, or else the whole thing is in trouble.

I think GKE provides tons of value, but people tend to under-estimate it. In order to keep providing that value, we need to make sure it is sustainable.

I'm really, truly sad that you perceive it as bait-and-switch, but I disagree with that characterization. If you want to move off GKE, I'll go out of my way to help you, but I urge you to take a big-picture look at the TCO.

I think part of the optics issue is that your peers seem to be offering similar services for free while being sustainable.

EKS has always had a fee.

AKS, well, I don't have any insight into their business, but I have my suspicions.

OVH also offers a free control plane, but their service is relatively beta so far.

Shameless plug: https://www.kubermatic.io Disclaimer: I work at Loodse (the company behind kubermatic)

Time to start looking into DigitalOcean more seriously.

G Cloud is already unreasonably expensive and nearly impossible to price manage. It's cool to see them double-down on that.

DO is famous for being a pain in the ass. They’re great for tiny hobby things but honestly I’d never run a prod/serious/client workload there. Too many issues. I’ve had multiple clients lose a droplet due to a simple credit card expiration.

It’s a race to the bottom on price so this doesn’t surprise me. They chose this life.

Why would you use the cloud but have that single point of failure? Billing is also a network activity. Why not have the backup infrastructure linked to another credit card?

To be fair, what is the best way to handle expiring credit cards? For one of my SaaS products, I give a 30 day grace period, then delete the data. If they didn't have a backup, that's on them...

If they delete the droplet the second a single CC payment fails, that's one thing, but I don't believe that's how their system works.

If you are literally in the business of enabling, storing, and protecting production workloads, data, etc., then catastrophic data loss should be an absolute last resort.

In both of these instances I am referring to a balance of less than $20.

So for less than $20 (a few weeks late), DO says, welp, fuck this customer, we are going to terminate all of their resources immediately.

This is what DO and others need to do: put it in your terms that you will keep racking up charges and then send it to collections. Charge interest, charge fees, do whatever you want. Turn $20 into $40. Why? Because businesses do not give a shit... if it is between losing everything or a slap on the wrist (monetary fee), they will choose the latter every time.

One of my clients had to painstakingly trudge through archive.org to recreate their missing blog posts. How fucking miserable is that? Over a few hundred megabytes of disk that DO could have kept around...

Also, actually make an effort to reach out before doing anything serious. Call phone numbers, email other members on the team to alert them to the issue, etc.

Too many times I have seen some script kiddie throw together a client's WP site and toss it on DO because it is 'so cheap and cool' and yet they forget about everything else: backups, security, managing the box, etc... and inevitably shit will hit the fan.

I was really rootin' for DO in the beginning. I even applied to work there when they were first starting out but did not want to relo to NY. Now I am moving three clients OFF of DO because they are all very unhappy with the level (or lack) of service they've received.

Or... if it was important enough to you that losing it hurts, then maybe pay attention to your emails, don't let things expire, and pay your shit on time. And of course, a sane person would back up anything important.

Yep, and that is something I have instituted since taking the reins. Still, DO could turn this lemon of a situation into lemonade by increasing revenue and preventing unnecessary headaches for their customers.

I think the saying "you get what you pay for" would apply in this case. People want to not pay for things, they don't get the things.

I keep backups of my cloud data whenever possible, mostly a couple hundred MB for small projects. I have been bitten by the same situation in the past.

Except no data was actually lost in that case? They got full access back to their account, and additionally, it wasn't a technical issue - they were accidentally flagged as a fraudulent/abusive account.

I linked two incidents. The first one required a trending HN post to get resolved. The second, the developer never got their data back.

If you knew how these storage services worked under the hood, you would understand that durability is not guaranteed. It is the customer's responsibility to ensure their data is backed up.


For the last 7 years I have been running on DO and never had any issues. I never understood why it is looked down on. In fact, I have faced many issues with AWS (particularly their old hardware). In one case, our EC2 instance was rebooting frequently. The AWS team didn't acknowledge any issue on their end, and after a few weeks asked us to upgrade the instance because of bad health.

In my experience, AWS is a very expensive cloud with a clunky UI and a big brand name. During consulting gigs, I have seen many customers who want to go with AWS only because of the brand. And later they cry when the bills start to hit the roof, with vendor lock-in.

I recently tried to spin up a VM for my own use in AWS, but I had to request a limit increase because I wanted a beefier machine. Easy peasy. My experience was comically bad.

====================== First email from AWS (several days after my request): ======================

Thank you for submitting your Limi Increase request.

I'm contacting your to inform you that we've received your Workspaces Application Manager - Total Products limit increase request, for a max of 5 in the Oregon region. I will be more than happy to submit this request on your behalf.

Please note that for a limit increase of this type, I will need to collaborate with our Service team to get approval. This process can take some time as the Service team must review your request first in order to proceed with the approval. This is to ensure that we can meet your needs while keeping existing infrastructure safe.

You may rest assured I will push towards expediting your request to be addressed as soon as possible. As soon as the Service team contacts me I will definitely let you know by email.

In the meantime, please feel free to let me know if you have any additional questions or concerns and I'll be happy to help!

I appreciate your patience while we evaluate your request.

====================== Second email: ======================

Thank you for your kind patience whiIe we continue to evaluate your Workspaces Application Manager - Total Products limit increase request.

I apologize for the time is taking to provide you with a resolution as we've always aimed to provide our customers with a rewarding experience that meets and goes beyond expectations. Unfortunately, from time to time there are cases where the final outcome is handled by another department and the time they take is completely out of our hands.

We certainly understand the sense of urgency that you have for this particular request and therefore, we have spent time communicating with the service team to let them know about it. Rest assured that your case is active, being looked into and the sense of priority has been transferred. As soon as we have an update from their end we'll be touching base with you immediately.

I am committed ensuring that you will get the help that you need as fast as possible, so we can ensure everything is being handled to your satisfaction, please feel free to let us know if you have any further questions or concerns through this case, so we can address them as soon as possible.

============= My response: =============

You can go ahead and cancel my request -- I've decided to not go forward with my project.

============= Their reply: =============

Greetings from Amazon Web Services.

We're sorry. You've written to an address that cannot accept incoming e-mail.

If you need to contact us, please visit http://www.aws.amazon.com/contact-us .

Thank you for your business.

I always suspected they still copy configuration manually. Is this delay because of that? There was an article describing how AWS did it that way in the early days. Back then even amazon.com was not running on AWS. I hope that is no longer the case.

The DO minimum for a cluster is $20/mo. That sure beats $72, although gcloud is offering the first zonal cluster for free. It might still be cheaper to use gcloud for small things. I do really like DO, though; I have a few personal projects hosted there.

Look at Vultr too. My last 2 support tickets were responded to within 2 minutes and solved within 10 minutes. Their support has always been good, but unlike almost all other companies, it seems to get better as they grow.

I run a Kubernetes cluster on Vultr and I haven't had any problems.

Outside of just hating Microsoft...why not Azure?

DO is for garage projects. If you need anything serious it's AWS / GCP or MS.

It's not so much about an additional fee (honestly, $0.10 per hour is nothing); it's more about Google's practice of suddenly charging for services, shutting down services as they like, and not giving a shit about us customers. Azure and AWS are much more customer-friendly here.

I run a hobby project in multiple zones because low latency is important, and Kubernetes makes it easy to do so.

There's no way I'm going to pay an extra $73/mo -- I already pay for the computing resources, this should be free.

Looks like I'll be moving away from GKE. It's a shame, I _was_ a big advocate.

We offer one free zonal cluster, which is specifically designed for your use case :)

I have personal experience of a few companies in the UK where GCP is offering 90+% discounts to onboard. GCP is spending hundreds of millions to do this. K8s control planes are a rounding error compared to this.

You could have grandfathered in current deployments but - nope. In the tech world this is up there with killing Google Reader.

I use more than one zone because I run TCP services which need low latency. I'll probably just switch to Digital Ocean.

There actually was a $0.15/hr fee a few years back:


YMMV, but I think the value prop of using a managed GKE cluster vs the raw costs and engineering time to run your own control plane is still strong.

> One zonal cluster per billing account is free

For hobby projects nothing will change.

"Let's trade goodwill for short term profits." I suppose not really surprising given the maps fiasco and Oracle appointment but this really comes off poorly to me.

Flat fee is gonna suck for people running a lot of clusters. I bet there are some people out there spinning up a cluster per x who are going to be real unhappy about this.

I don't like the direction this is heading, it seems like the SLA and the accompanying charge could easily have been optional.

We have a couple of dozen clusters, two per client, and can't change the architecture. We use helm and terraform and can build new clusters quickly but we can't treat them entirely like cattle because we don't own all the DNS. Our clients are not the sort to do things quickly - or even slowly.

Does anybody have any good, up-to-date resources comparing the current options for K8s providers? I'd like to get a feel for what it would take to switch.

Hey everyone - Seth from Google here. Please let us know if you have any questions! You can learn more about the pricing changes at https://cloud.google.com/kubernetes-engine/pricing.

Hey Seth,

Thanks for being the recipient of everyone's (justifiable) frustrations. They probably don't pay you enough.

I think, what is especially frustrating about this, is that we do already pay for resources that are provisioned by our K8S clusters. We pay for the network traffic, the storage, the compute. I saw you mention StackDriver... we pay for that as well.

I can appreciate that actually setting up and managing GKE backplanes is a non-trivial expense, but I generally assumed that that cost was amortized out, just like I don't pay for the backplane that runs GCE and the rest of GCP's service suite.

I also appreciate that you mention some customers are perhaps taking advantage of this "free" resource. But isn't that what quotas are for?

Frankly, more concerning than the fact that I now have a new $73/mo fee attached to my account (which is not the end of the world) is that this really comes out of left field, in the context of concerns about the nature of GCP's new leadership, and reports of Google leadership debating GCP's future as a going concern. I realize a lot of that isn't well founded, but it's surprises like this one that keep that narrative alive. AWS ain't no saint, but they are pretty consistently who they are: not full of bad surprises.

This just leaves a bad taste in the mouth, and makes me wonder if I can expect other surprising cost increases, or perhaps, if these don't "work", worse surprises like deprecation notices. Is this the precursor to you all discontinuing GKE because, as the DevRel class likes to tweet, nobody should be using Kubernetes if they can use (more expensive) services like Cloud Run?

Are we about to get Oracled?

> They probably don't pay you enough.

Can confirm :)

> ...we do already pay for resources that are provisioned by our K8S clusters

Customers are charged for worker nodes, but until this point, the control plane ("master") nodes have been free. In addition to the raw compute costs for those nodes, there's the SRE overhead for managing, upgrading, and securing them.

> ...but I generally assumed that that cost was amortized out

<googlehat>I'm not really sure.</googlehat> <civilian>My guess would be that, initially, this was the case. However, over time, people have created many zero-node clusters, and now the amortization no longer works out. Again, pure speculation.</civilian>

> But, isn't that quotas are for?

See my comment above about zero-node clusters.

> I have a new $73/mo. fee attached to my account (which is not the end of the world) is that this really comes out of left field...

Acknowledged, but I do want to highlight that the changes take place a few months from now (June 2020), not immediately. Furthermore, each billing account gets one zonal cluster with no management fee.

> Is this the precursor to you all discontinuing GKE because, as the DevRel class likes to tweet, nobody should be using Kubernetes if they can use (more expensive) services like Cloud Run?

100% no. Also, Cloud Run is almost always cheaper than running a Kubernetes cluster.

> Are we about to get Oracled?

I'm not sure what you mean by that verb.

Thank you for the detailed response.

> In addition to the raw compute costs for those nodes, there's the SRE overhead for managing, upgrading, and securing them.

By that logic, can we expect to see charges for GCP Projects and the GCP Console? Cloud IAM?

> people have created many zero-node clusters

I'd be really curious what is driving folks to do that. Are they using the backplane for CRDs and custom controllers and no compute?

This feels like it could be addressed similar to alpha clusters, or with a quota, e.g.: clusters with 0 nodes for > 24 hours will be terminated?
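The quota being proposed could be little more than a periodic sweep over cluster metadata. A rough sketch, purely illustrative: `currentNodeCount` mirrors the field the GKE API actually reports, but `emptySince` is hypothetical bookkeeping such a policy would need, and the 24-hour grace period is the commenter's suggested number, not anything Google has announced.

```python
from datetime import datetime, timedelta, timezone

GRACE = timedelta(hours=24)  # hypothetical grace period for empty clusters

def clusters_to_reap(clusters, now=None):
    """Return names of clusters that have sat at zero nodes past the grace period.

    Each cluster is a dict like:
    {"name": str, "currentNodeCount": int, "emptySince": datetime or None}
    """
    now = now or datetime.now(timezone.utc)
    return [
        c["name"]
        for c in clusters
        if c["currentNodeCount"] == 0
        and c.get("emptySince") is not None
        and now - c["emptySince"] > GRACE
    ]
```

A cluster that scaled back up before the grace period expired would simply drop out of the reap list on the next sweep.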

Separately, it seems like handing everyone 3 months to figure out what to do about a new $73 * X fee isn't the best plan. Including some kind of estimate in the emails that were sent out would have been helpful. There was a change in StackDriver pricing a while back that did this, and it was very helpful for understanding how we would be impacted.

> Furthermore, each billing account gets one zonal cluster with no management fee.

My feedback is that you would probably be getting way less blowback if that free tier didn't come across as inadequate. I can appreciate that there are use cases where it makes sense for you all to be charging. But one zonal cluster... it makes the whole thing feel punitive.

> I'm not sure what you mean by that verb.

I have a feeling we're all about to go on a journey of discovery together.

> I'd be really curious what is driving folks to do that.

I was one of those people. I got an email from Google this morning, and thought "that's weird. I didn't even know I had a Kubernetes cluster." I think I created it years ago to work through a Kubernetes tutorial and, since it was free, I never bothered to delete it.

So, I can imagine this being a problem. Though it seems like having a minimum hourly charge per cluster would have been a better way to handle this (i.e. if your cluster is using less than $0.10/hr in resources, you get charged the difference). On the other hand, you'd probably still have people complaining about it because, well, people love to complain.
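The "charge the difference" proposal above could be sketched as a simple top-up rule (illustrative only; `hourly_usage` stands in for whatever a cluster's metered resource spend would be, and the scheme itself is the commenter's idea, not Google's):

```python
MIN_HOURLY = 0.10  # proposed minimum charge per cluster-hour, in dollars

def hourly_charge(hourly_usage: float) -> float:
    """Top-up billing: a cluster already spending at least $0.10/hr on
    resources pays nothing extra; an idle or tiny cluster pays only the
    difference up to the minimum."""
    return max(0.0, MIN_HOURLY - hourly_usage)
```

Under this rule a zero-node cluster would pay the full $0.10/hr, while any cluster already burning more than that in compute would see no new line item at all.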

That seems like a really good idea, maybe they should look at doing that? As noted, $73 should be a trivial charge both from Google's perspective and the customer's for an actual cluster.

If abuse of zero-node clusters is an issue, wouldn't it be better to introduce a zero-node cluster fee the same way you charge for unused reserved IP addresses?

Also, how much resource does a 0-node cluster actually use on the control plane?

So couldn't you charge the control-plane for zero-node clusters?

This is really disappointing. I've been a big proponent of GKE not only to my employer but to my friends as well. I think it's the best Kubernetes implementation available. The justification for a management fee b/c there are abusers just feels like an excuse for making some extra revenue. Truly with Google's prowess you can detect and deal with abusers without having to raise cost for everybody. I'm worried this is going to dampen the momentum of Kubernetes adoption unfortunately...

It's not _just_ abuse. It's not _just_ the new SLA. It's also the additional functionality we've built beyond just Kubernetes and how simple we have made the offering and auto-scaling, etc.

Why is this change coming in? I can hardly see costs on Google's side to provision and 'manage' K8S having increased over the past 3 years, especially given that it's used in production there. Also, given that no-cost K8S clusters were pushed by your sales and marketing teams back in 2018 as a significant benefit of switching to GCP, it doesn't really inspire confidence in GCP if we're just going to be shafted further on down the line. Lastly, $0.10/hour is expensive given that a 'managed' K8S cluster can be rolled out using Terraform and Ansible with a bunch of GCE nodes with minimal effort. This frankly just feels like a cash grab from those that are either inexperienced/unfamiliar with cluster management/provisioning, or those that are in too deep with GCP and won't have any option other than to pay the piper, so to speak.

Thank you for the question. While I can't go into deep detail... as with most free things, people find a way to abuse the system. While we've invested significant effort to curtail such abuse, this is where we've landed.

To your point about running your own K8S cluster - two things:

1. That's something you have always been (and still are) entitled to do.

2. Having personally run large-scale K8S clusters, the challenge isn't provisioning, it's maintenance, security patches, upgrades, etc.


You guys roll out a free service. You tell your sales people to hype it up as a benefit over other providers. You somehow don't anticipate that some users will "abuse" the free service, so you hike up rates for everyone?

Sorry, I don't think you're likely to find much empathy on this one.

Very disappointed. Not by the price increase per se... but by the lack of a reasonable 'always free tier'. I think you should strongly consider tweaking the pricing to provide one, two, or three multi-zone clusters for free instead of one single-zone cluster. Let us see the power of GKE without the extra charge and grow on your platform.

This would allow new companies to choose GCP over AWS/Azure and start out with a proper highly available cluster or two/three in different regions and grow to more clusters over time. With the new pricing you're forcing them to choose between a single-zone cluster or $70 per month per cluster (or another cloud).

Please consider tweaking the new pricing to enable a lower price ramp-up for newer companies... why not offer three multi-zone same-region clusters for free and then charge more established enterprises using more than three clusters? I appreciate the money is in the big customers... but why scare away the small customers who want 1-3 highly available clusters behind a GCLB to provide higher availability and lower global latency? The mindshare of developers will move away from GKE if you're not careful... both to AWS/Azure and to others like DO Kubernetes.

I believe the community is keen to engage with you on this, based on the comments in this thread. If your team would like to talk to a disappointed (very small but hoping to grow) customer, I'd be happy to jump on a call. I hope others here would be happy to do the same.

I think you'd be interested in our Google Cloud for Startups program: https://cloud.google.com/developers/startups

It's a good program. I'm currently in the stage one step before that program. Would your team consider tweaking the pricing as I mentioned, with the goal of helping early stage startups choose GCP? GKE/kubernetes is increasingly not just for big enterprise. Personally I find GKE as easy as app engine or cloud run but much more future proof and more flexible/powerful... the real heart of a GCP to rival AWS. Just this week I set up Config Connector to provision a global load balancer and other GCP resources used by two clusters. An always free tier of two or three (ideally multi zone clusters) would I think go a long way to earn the trust and belief of many devs and early stage startups. As would coming back in the next few days with tweaked pricing based on community feedback.

Additional comments: You could limit the number of nodes in the always-free-tier clusters. Above n nodes the free tier clusters aren't free.

With the new pricing, I can't choose to use GKE instead of app engine/cloudrun and get the same availability (by this I mean multiple zones) without having to pay for both the nodes and the new control plane cost. Those managed products run over multiple zones in a region. It's disappointing that even just one multi-zone cluster is charged. I'd be very happy to see you include at least a single multi-zone cluster control plane in the free tier.

> Would your team consider tweaking the pricing as I mentioned, with the goal of helping early stage startups choose GCP?

To be clear, it's not my team. I'm relaying feedback, but I can't make any guarantees or promises.

All this feedback is super valid and important, and it's being synthesized to the product team.

I appreciate that. Thanks for being available on hackernews and helping relay feedback.

This change is pretty huge for non-revenue units & small teams at institutions and SMBs. These smaller teams often seem to run two clusters rather than try to split their production and dev environments within one cluster (I think this is even widely recommended for smaller, less experienced outfits). For many, the management fee will be a large percentage cost increase for units that are very cost sensitive, and it will require significant re-engineering to avoid for units where engineer hours are a scarce resource.

Seems weird, given that GKE is basically the main reason people seem to use Google Cloud. These kinds of users aren't big fish, but I suspect a lot of them are going to run.

Hi Seth - I am a LONG time lurker here on HN, but this news just forced me to create an account.

I am part of a small company which has separated our deployment into a number of sub projects, some of which are: dev, staging, production, ci, etc.

The difference for us will be several hundred dollars per month, and that will make an actual (negative) difference for us. We didn't need a "financially backed SLA" before and we don't need it now.

You asked for a question and here it is: Why isn't a financially backed SLA a part of a billing negotiation? I mean, there are some really cool features in "Anthos" but I am not picking up a phone to find out how much that is going to cost.

If a really useful feature like "Cloud Run for GKE" is awkwardly placed in the "Anthos" box, then why isn't the SLA part of "Anthos" too?

Free clusters was a huge part of why we selected GCP. If this SLA nonsense isn't made optional, our next project is not landing on GCP.

Thank you for the feedback. I'll relay this to the product team. I feel your frustration and, unfortunately, I do not have much to offer beyond my promise to relay this feedback and the items I've expressed in other responses.

Hey Seth, thanks for taking to the comments here; sad I wouldn't be able to catch one of your talks at Next this year in-person.

I'd like to share some feedback that echoes that of other commentators, from a different perspective.

I run a local cloud developer community with regional pull for attendees, as well as working directly with local early-stage startups looking to become cloud-native.

GCP has always been my go-to recommendation for our attendees (a mix of developers, technical founders, and some enterprise technology folks), given the affordability factor; the pathways to additional credits to flesh out ideas, learn new technologies, or stretch the limited runway of a new organization; and ultimately my belief that GCP is one of, if not the best, clouds for developers given the investment in documentation and DevRel engagement channels.

With the rollback of open-enrollment into a smaller plan of Google Cloud for Startups, and price changes like this, I'm fearing I've chosen the wrong hill to die on when talking with these new customers.

I appreciate the inclusion of a free regional cluster per account, which will still afford myself the opportunity to demonstrate k8s at meetups and to end users without taking more of an out-of-pocket hit, and for folks to learn on their own or maintain hobbyist projects on the same budgets they are accustomed to.

My fear with this announcement is that the negative repercussions will not be felt on the bottom line or the figures that, it seems, are more and more the priority of Google Cloud's leaders. Rather, it will be felt hardest by the smaller customers: the hobbyist developer or technical co-founder looking to learn new technologies and scale up their operations, who, at least in my experience, are driving growth in mindshare around GCP in their communities.

Put another way, moves like this will further tarnish the reputation of Google among those to whom the sales engineers have, for the last two years, heavily promoted having no cluster management fees, unlike "the other guys" (and I recognize most of these folks will never become the big customers that satisfy the requirements of executives).

I hope that when the dust settles, this does not lead to a retraction of what makes Google Cloud great in my mind, which is specifically to developer experience and outreach.

With that said, I would suggest really driving home this change through dismissable in-console communication at the point of cluster creation, on the dashboard, and in email communication to the folks this will impact, with a clear picture of the impact on them. No one wants another large disruption of thousands of small organizations and users, as was the case with the Google Maps pricing change.

Personally, I'd love to see it increased to one free regional or zonal cluster per account for the remainder of 2020, and then make only one zonal cluster free per account effective 2021. Given the uncertainty around engineer capacity and scheduling amid the ongoing human malware crisis affecting companies large and small, I think this could be a good middle ground to satisfy most customers affected by these changes, while still achieving the objective of moving this away from being a loss leader of sorts.

This feedback is super valuable - thank you for sharing. I'll be sure to relay it to the product team.

Hi Seth, two messages this change is sending: 1) GCP can arbitrarily add additional fees to services we consume whenever a PM is under pressure to increase revenue. 2) GCP pricing only goes one direction: UP

I know you are just the messenger here, and I send my sincere sympathies that you have to work with a product manager there that can't compute strategic impact of this change :)

Oh, so now in addition to Google's reputation for killing services, GCP wants a reputation for raising prices?

They have done it before. Remember Google Maps?

They already had this reputation.

I've been mentioning Google's penny-pinching for a while here. Simple things like Chrome's address bar showing Google searches before my bookmarks. It's all part of the monetization. Are fees like this because Google is struggling to continue to grow? That's probably the most concerning thing for Google's future, rather than a $73 fee.

Those who have had a chance to look deeper into the k8s control structure can comprehend such a move. There's no free lunch: master nodes, etcd, and all that stuff in HA has its costs. That's it. Surprisingly, AWS announced a 50% price reduction for the EKS control plane :)

If the trade is a SLA for a management fee, that is a reasonable business decision, and largely a rounding error for a decently sized company with a well designed clustering system. Lack of SLAs is a major issue IME with cloud providers.

We're a little bit too locked to GCP to migrate out.... But I'll definitely stop evangelizing GCP to people.

I have now lost faith in you, GCP.

This is really disappointing. GKE was a staple amongst Kubernetes adoption, not only for the feature-set but also that there were no overhead costs.

I hope GCP re-thinks this.

For folks just trying it out, 1 cluster is still free.

For folks just trying it out, 1 cluster is still free... in a single physical data centre. Sadly you'll be charged for running a cluster across two or three data centres in the same region (e.g. London).

This is a fair point. We don't have an HA (multi-master) zonal offering either, because mostly people don't want that.

I don't offhand remember AWS increasing prices for a service before but I might be wrong. How often does Google increase prices?

For a business I prefer a company that starts with higher prices and then only lowers them to one that may increase them at any time.

I was wondering the same thing. Nothing comes to mind, but there are enough services I don't use that it's easily possible something slipped through.

AWS lowering compute costs is fairly widely publicized, but I am curious if anyone has compiled a list of the cloud providers (AWS, Azure, and GCP) increasing the costs of services.

Google Maps, GCE: this is the second time I see Google increase prices on highly important parts of a business. On the third case it will be a trend.

Makes me rethink whether I want to do any business with Google anymore.

That's about $72 a month, which matches Amazon EKS's lowered pricing. I guess that rules out my hopes of EKS not charging for the management plane in the near future.

I mean... if I were Amazon, I'd eliminate those management fees this afternoon just to spite them. I'm sure it's a rounding error as far as AWS is concerned.

It looks like Google's offering one free cluster per billing account if I read that right. If that's not the case, I'll be turning off my cluster before this kicks in.

One zonal cluster per billing account is free

Note: Azure does not charge a master/cluster management fee (bias disclosure I work for Microsoft)

I'm not sure any provider appearing to capitalize on a momentary pricing decision is a great idea. All are too big, and things change too fast, for crowing as if anything is a long-term decision.

Replies on another thread by alleged Google employees appear to indicate they are learning on the fly that doing support matters and costs money.

There are a lot of comments from people wanting free things. You may get those things for a while but it won't last. Free only works for so long in anything no matter how hard that is for some people to figure out across all aspects of life.

Not looking to capitalize just looking to state a fact that has been in place since the service was launched. In the earlier comments people were discussing/suggesting alternatives and I believe AKS is worth considering.

Given this news, I suspect that won't last.

It wouldn't surprise me if Azure charges for control plane at some point. Seems like EKS and GKE can get away with charging so why not...

Attempting to capitalize on a competitor's announcement isn't cool and doesn't look great.

I don't work for Azure and having used both GKE and AKS, I can say GKE is superior, however, I don't understand why this comment isn't cool. It's not like they're capitalizing on your misery. It's a decision you have taken and it's a fair game IMO that they want to emphasize their advantage over you.

Seems fine to me. Frankly, what isn't cool and doesn't look great is you not disclosing you work for Google in this comment.

Wow, this is a huge bummer. A lot of our infrastructure assumptions have been based around having several small GKE clusters.

I think this trend is not good overall, and people will eventually be very unhappy with it. I'd rather help you figure out how to use fewer clusters.

Our cloud service here at Confluent is designed around giving customers their own infrastructure. A lot of the time, that means giving them their own k8s cluster. The management overhead there isn't the issue, however.

The real issue comes into play when you try to make developer environments.

To give our developers any semblance of a "real production-like" workload, they need to work with an entire kubernetes cluster - maybe even a couple - to simulate what's happening in production.

This means at any given time, we have hundreds of GKE clusters because each developer needs a place to try things. Yes, these are ephemeral and can be tossed aside, and yes they cost a tiny bit in VM prices, but adding a per-cluster management fee is going to skyrocket this expense and push us towards trying to figure out ways to share these clusters between developers, which defeats the entire purpose of the project.

We'll have to seriously consider abandoning GKE for this use-case now and that sucks, because it's by far the fastest managed k8s solution we've found so far.

Try KIND. Much better devex.

Tim, would you be willing to elaborate on why you dislike the "many small clusters" pattern?

Many small clusters just do not deliver on a lot of the value of Kubernetes. Clusters are still hard boundaries to cross (working to fix that). Utilization and efficiency are capped. OpEx goes up quickly.

There are reasons to have multiple clusters, but I think the current trend takes that too far.

TO BE SURE - there's more work to do in k8s and in GKE.

Actually, could you elaborate on the benefits of your approach? Edit: I am asking because this is counterintuitive to anything I'd want to solve with K8s, especially when it comes as a managed service.

I'm sorry to hear that. You can use namespaces and separate node pools to isolate workloads. We'd love to hear more about your use case for having many small GKE clusters.
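A minimal sketch of the isolation pattern being suggested (all names here are placeholders): a namespace per tenant, plus a dedicated node pool that the tenant's pods opt into via a nodeSelector on GKE's per-pool node label.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a        # placeholder tenant name
---
apiVersion: v1
kind: Pod
metadata:
  name: worker
  namespace: tenant-a
spec:
  nodeSelector:
    cloud.google.com/gke-nodepool: tenant-a-pool   # GKE's per-node-pool label
  containers:
    - name: app
      image: gcr.io/my-project/app:latest          # placeholder image
```

Namespaces give you RBAC and quota boundaries within one control plane; the node pool keeps tenants from sharing VMs. It is a softer boundary than separate clusters, which is exactly the trade-off being debated in this thread.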

Hey Seth, I know you used to work at Hashicorp on vault. I think Vault recommends that if you want to deploy it on Kubernetes, it should have the cluster to itself.

That's correct. Vault Enterprise (at my last math) was ~$125k/yr, so that management cost is negligible :)

I am really disappointed by this. One of the reasons we moved to GCP from AWS was the ability to create multiple clusters at no extra charge. Now it looks like the pricing matches EKS.

Thomas Kurian is going to be the Steve Ballmer of Google, and he's currently tearing up Google from the inside. Google leadership need to wise up and give this guy his walking papers.

he is like GKE, once you start him, you can't stop :)

The good ol' bait and switch tactic; lure new customers in with zero fees and switch out the fee structure once they're locked in.

I'm not sure why everyone is so surprised. Part of Google's monetization model is to offer developer-friendly software for free, then charge a small fee once it crosses the headache-to-replace threshold.

$72/month per cluster, regardless of the size. It's interesting they're not charging per managed node (outside of the regular machine cost), makes it steep if you want to have a small cluster up.
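For reference, the arithmetic behind the $72-$73 figures quoted around this thread. This is a sketch only: actual billing is metered per second, the common convention of a 730-hour month is assumed, and the exemption applies to one zonal cluster per billing account (a regional cluster would not be exempt).

```python
HOURLY_FEE = 0.10      # announced management fee, $ per cluster per hour
HOURS_PER_MONTH = 730  # common billing convention: 8760 hours / 12 months

def monthly_fee(clusters: int, free_zonal: int = 1) -> float:
    """Monthly management fee, assuming `free_zonal` exempt clusters."""
    billable = max(0, clusters - free_zonal)
    return billable * HOURLY_FEE * HOURS_PER_MONTH
```

So a single (zonal) cluster stays at $0, while each additional cluster adds $73/month regardless of its size; the $72 figure some commenters use comes from assuming a 30-day month instead.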

Each billing account gets 1 free zonal cluster so the cost for keeping a small cluster up won't change.

Currently I have a small cluster with 3 nodes running, each in a different zone (data centres next to each other, eg in London). Sadly this will now be charged.

How do I know if my cluster is zonal? Does it mean all nodes are VMs provisioned in the same zone?

>Single-zone clusters

>A single-zone cluster has a single control plane (master) running in one zone. This control plane manages workloads on nodes running in the same zone.


AKS still has a free control plane. GCP won my business for a bit, but quickly lost it based on some features. I still love StackDriver and BigQuery, but don’t love doing business with GCP- the sales/support experience was pretty lacking, networking for serverless was immature for multi-region, and what they are doing with Anthos feels like Oracle (it is).

This is seriously such a bummer. This was the main reason for us moving out of AWS

We also jumped over for this reason and had to deal with a large number of gotchas from GCP. I kinda wish I never spent the long nights on this...

I think this is fair: for hobbyists, a free zonal cluster is sufficient and you probably wouldn't use more than one cluster. For businesses/revenue drivers, the $7.30/mo/cluster is nothing (EDIT: actually $73/mo/cluster, which may be a tougher sell but if the business is in a case where it benefits from Kubernetes, it's still likely insignificant relative to the cost of actually running the VMs on it )

What's unfair about the situation is the fact that Google's sales people hyped this up as an advantage over other providers for years.

Just doubling down on the free zonal cluster as a hobby tier. Folks seem to be missing that in the announcement.

If I'm not mistaken, it should be $73.00+/mo

Oops, I can't math. Fixed.
