I agree about avoiding microservices where there's no real need for them, and that a single database can go a long way if well designed, since ram today is abundant everywhere and stuff like moving indexes onto dedicated disks with dedicated iops is as easy as ever. but why the hate against kubernetes?

even your bog standard java three tier app gets some tangible benefit from being dockerized (repeatability, whole os image versioning) and placed on a node that abstracts everything else (load balancing, routing, rolling deployments etc)

at the lowest level, it's just a descriptor for infrastructure, it impacts nothing of your app or business logic unless you want it to.

even single node testing environments are covered one way or another (my fav now is minikube); the only real limit is if you have a requirement for self hosting.



> but why the hate against kubernetes?

You seem to equate Kubernetes and containerization. Kubernetes is just too complex for most use cases, that's all. There are simpler solutions (in the sense of less powerful) that still allow using containers and scaling out, but require less maintenance.


> You seem to equate Kubernetes and containerization. Kubernetes is just too complex for most use cases, that's all.

Kubernetes is just a way to run containers. You configure your application, specify how your containers are deployed, set up the plumbing between containerized applications, and that's it.

What problems are you experiencing with Kubernetes that make you feel it's too complex?


I think a lot of people here conflate using Kubernetes (not actually more difficult than any other container service) with running a Kubernetes cluster (a difficult mess, and it was even called out in the original post).


you need something to run the containers anyway, and with all the management overhead docker and other systems add, one can just cut to the chase and let an orchestrator do it


Or one can use a service that does the orchestration in a certain way so that one does not have to care about it, as long as one can live with the constraints of the service. E.g. AWS Fargate or GC Run.


> E.g. AWS Fargate or GC Run.

I really can't understand this line of reasoning. How exactly is something like Fargate simpler than Kubernetes?

I understand the argument in favour of using Docker Swarm over deploying your own Kubernetes on premises or on heterogeneous cloud, but Fargate?

And using GC Run to run containerized apps completely misses the point of containerization. Containers are way more than a simple way to distribute and run isolated apps.


Well, Fargate is simpler in that I define CPU+RAM, give it an image, tell it how many copies to run in parallel and let it run. If something crashes it will be restarted. If I want to deploy, I push a new image.
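To make that concrete, here's a sketch of what such a Fargate task definition can look like (the image, names and sizes below are invented placeholders, not anything from this thread):

```json
{
  "family": "my-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0",
      "portMappings": [{ "containerPort": 8080 }]
    }
  ]
}
```

The "how many in parallel" part lives in the ECS service that runs the task (its `desiredCount`), not in the task definition itself.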

That's pretty much it. The container cannot be accessed from the outside except for the defined interfaces like http, the containers cannot discover each other unless I use something else to coordinate between them. That is all restrictive, but it also makes things simple.

I still need to do health checks and such - so if I don't really need long-running containers, I can make it even simpler by using lambda. (well, in theory; I don't like lambda, but that's only because of the implementation)


> Well, Fargate is simpler in that I define CPU+RAM, give it an image, tell it how many copies to run in parallel and let it run.

Kubernetes allows you to do just that with about a dozen lines of infrastructure-as-code.
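A minimal Deployment along those lines might look like this (image, resource sizes and replica count are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                     # how many copies to run in parallel
  selector:
    matchLabels: {app: my-app}
  template:
    metadata:
      labels: {app: my-app}
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0   # the image to run
          resources:
            requests: {cpu: 500m, memory: 512Mi}   # CPU+RAM sizing
            limits: {cpu: "1", memory: 1Gi}
```

If a container crashes, the Deployment's controller restarts it; pushing a new image tag and applying the manifest does a rolling deploy.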

> That's pretty much it. The container cannot be accessed from the outside except for the defined interfaces like http, the containers cannot discover each other unless I use something else to coordinate between them. That is all restrictive, but it also makes things simple.

Kubernetes does that out-of-the-box, and allows you to keep your entire deployment in a repository.

> I still need to do health checks and such - so if I don't really need long-running containers, I can make it even simpler by using lambda.

With Kubernetes, health checks are 3 or 4 lines of code in a deployment definition.
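Something like this, under a container entry in a Deployment (the path and port here are placeholders):

```yaml
livenessProbe:
  httpGet: {path: /healthz, port: 8080}
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet: {path: /ready, port: 8080}
```

The kubelet restarts the container when the liveness probe fails, and takes it out of load balancing while the readiness probe fails - no extra monitoring wiring needed for that part.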

And it's used to manage auto-scaling.
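For auto-scaling that means a HorizontalPodAutoscaler pointing at the Deployment (names and thresholds below are invented for illustration):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target: {type: Utilization, averageUtilization: 70}
```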

Oh, and unlike Fargate, you can take your Kubernetes scripts and deploy your app on any service provider that offers Kubernetes.


A dozen lines of code is a lot. Especially when it's in a language you don't know and don't already have installed. Often, the most difficult program to write is "hello, world", since when it fails, it fails in ways that are difficult to understand and specific to your installation and infrastructure. It can be hard to get help for those.

I'm sure that once you've bitten the bullet and learned Kubernetes, these things are easy. But for a lot of use cases it's great to avoid having to bite that bullet, do a one-click thing, and get back to your core development.


> A dozen lines of code is a lot.

Are you really trying to argue that using 1 or 2 lines of YAML per component to define how an entire application is deployed and scaled is a lot?

How many lines of CloudFormation would you need to do just that?

And... I mean... What's your alternative to an infrastructure-as-code config file? Clicking around a dashboard to treat your app as a pet?

> Especially when it's in a language you don't know and don't already have installed.

It's self-describing YAML (or JSON). It's not an arcane special-purpose DSL like CloudFormation or SAM, or a convoluted Python program like CDK.


> Kubernetes allows you to do just that with about a dozen lines of infrastructure-as-code.

Apart from what was already mentioned - what if I exceed the number of machines in my cluster? What if I want X GB of RAM and there is no machine that big in the cluster? I don't even have to ask these questions if I use Fargate, because I don't care about the underlying cluster at all.

> Kubernetes does that out-of-the-box, and allows you to keep your entire deployment in a repository.

You've got it backwards. Kubernetes allows a configuration where you can access the containers. And now, if I have to deal with a system using Kubernetes, I don't know if someone sometimes accesses the containers unless I look at the code and check that it is up to date / successfully deployed.

> With Kubernetes, health checks are 3 or 4 lines of code in a deployment definition.

Yeah, and code in your running service, and monitoring, and alerting... and so on. Maybe for you these are small differences, but for me that is a huge maintenance burden.

> Oh, and unlike Fargate, you can take your Kubernetes scripts and deploy your app on any service provider that offers Kubernetes.

That's certainly an advantage. But it doesn't make Kubernetes simpler, and that is what we are discussing.


> but why the hate against kubernetes?

Because it adds a massive amount of (to some degree hidden) complexity. In particular, it adds a lot of additional (potentially subtle) failure modes.

And while this is more a property of microservices than of Kubernetes, there is some demand for a more opinionated, simpler alternative. (There was an HN article about that recently.)


Reproducibility is great. I would probably employ Docker for that if that were my role. However, this is completely from my own limited personal experience: I've worked with multiple devops people at earlier stage companies who have harped on about how we needed to focus on potential future scale when the company had hardly any customers. This was detrimental to our ability as programmers to understand the domain and requirements due to everything being so much harder to implement and test. It's not that I didn't try. I am also not convinced you need to worry too much about things like load balancing in the beginning. In these contexts, I never saw Kubernetes being used for reproducibility alone.


Most of the apps I write have an SLA, and a load balancer is a great piece to have ready for simplifying/automating rolling updates, even if the backend is a single node


Fair enough.


> even your bog standard java three tier app gets some tangible benefit from being dockerized (repeatability, whole os image versioning)

Whole OS apart from the kernel. It's rare that a Java app has a dependency on a particular OS version, but, in my experience, even rarer that the dependency does not include the kernel.


That was one of Java's big original selling points: write once, run anywhere.

They dropped that kinda fast. With, like, version 1.1.


No, not at all. People routinely develop on OS X and deploy on Linux without issue.

Here, though, I just meant that I can develop on Ubuntu 18.04, build on CentOS 6, and deploy on RHEL 7 without issue.


I mean, they dropped the slogan. The idea still held up, and in a lot of ways, still does, though the jungle of different versions and Maven version hell makes it less universal than the initial hope.



