A lot of the work exists only because of the sheer size of the codebase. E.g. Forge and ObjFS, which are necessary because compiling the really large binaries on a normal workstation would OOM it, or take days. https://bazel.build/basics/distributed-builds
If your codebase is "normal-sized," you don't need nearly that amount of infrastructure. There is probably some growing pain when transitioning from normal-sized to "huge," but that's part of the growing pain for any startup. You're going to have to hire people to work on internal tooling anyway; setting up a distributed build and testing service (especially now that there are so many open-source and hosted implementations) is worth the effort once you're starting to scale. And you're going to have to set that up regardless of whether you use a monorepo or many separate repos.
It's probably only worth hiring serious, dedicated build teams like Google's once your CI costs are a significant portion of your operating costs. That probably won't happen for a while for most startups.
I think that's a bit misleading (disclaimer: I very much like Bazel existing, though I think a better version of it could exist somewhere).
Certainly a lot of work goes into Bazel core to support huge workflows. But a huge amount of work also goes into simply getting tools to work in hermetic environments! Web-stack tooling in particular is so bad about this that many Bazel rules automatically patch generated scripts from npm or pip just to get things working properly.
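To make this concrete, here is a minimal sketch of the kind of patching I mean. The function name and the hermetic interpreter path are hypothetical, but the pattern is what several Bazel rulesets do: rewrite a script's shebang so it runs under a toolchain-provided interpreter instead of whatever the host system happens to have.

```python
# Hypothetical sketch: rewrite a generated script's shebang so it uses
# a hermetic, toolchain-provided interpreter rather than the system one.
def patch_shebang(script: str, hermetic_interpreter: str) -> str:
    lines = script.splitlines(keepends=True)
    if lines and lines[0].startswith("#!"):
        lines[0] = f"#!{hermetic_interpreter}\n"
    return "".join(lines)

original = "#!/usr/bin/env node\nconsole.log('hi');\n"
patched = patch_shebang(original, "external/nodejs/bin/node")
print(patched.splitlines()[0])  # -> #!external/nodejs/bin/node
```

Real rulesets do considerably more than this (path rewriting, environment scrubbing), but the shebang rewrite alone shows why "just run the npm-generated script" doesn't survive contact with a hermetic sandbox.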
There is also incidental complexity in running Bazel itself, because it uses symlinks to implement sandboxing by default. I have run into several programs whose "is file" checks conclude that symlinks are not files.
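The failure mode is easy to reproduce. A check that stats the path without following symlinks (a pattern I've seen in real tools) reports a perfectly good symlink-to-a-file as "not a file," while the link-following check does the right thing:

```python
import os
import stat
import tempfile

# Create a regular file and a symlink pointing at it, mimicking how
# Bazel populates a sandbox with symlinks to source files.
d = tempfile.mkdtemp()
real = os.path.join(d, "real.txt")
link = os.path.join(d, "link.txt")
open(real, "w").close()
os.symlink(real, link)

# Buggy pattern: lstat does NOT follow symlinks, so the symlink is
# classified as "not a regular file" even though its target is one.
naive_is_file = stat.S_ISREG(os.lstat(link).st_mode)

# os.path.isfile follows symlinks, which is almost always what a
# build tool's inputs-exist check actually wants.
correct_is_file = os.path.isfile(link)

print(naive_is_file, correct_is_file)  # -> False True
```

Tools written with the naive check work fine everywhere until they meet a symlink-based sandbox, which is exactly the incidental complexity described above.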
Granted, we are fortunate that lots of that work happens in the open and goes beyond Google's "just vendor it" philosophy. But Docker's "let's put it all in one big ball of mud" strategy papers over a lot of potential issues that you have to face head-on with Bazel.
Personally I think this is what companies should do -- it guarantees hermeticity, as you say, and guards against npm package deletion (left-pad) and supply-chain attacks. But for people who are used to just `npm install`, there is a lot more overhead.
Personally, I don't think there is much value to society in endlessly vendoring exactly the same code in various places. This is why we have checksums!
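The point about checksums can be sketched in a few lines. The function name and the payload are made up for illustration, but the mechanism is the same one npm lockfile `integrity` fields and Bazel's `http_archive(sha256 = ...)` rely on: pin the hash once, then verify every future download against it instead of keeping a private copy of the bytes.

```python
import hashlib

def verify_download(data: bytes, expected_sha256: str) -> bool:
    # If the fetched bytes hash to the pinned value, they are
    # byte-for-byte the artifact you originally reviewed -- the same
    # guarantee vendoring gives, without storing a copy yourself.
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Stand-in for a package tarball fetched from a registry.
payload = b"left-pad 1.3.0 tarball bytes (stand-in)"
pinned = hashlib.sha256(payload).hexdigest()

print(verify_download(payload, pinned))                 # -> True
print(verify_download(payload + b"tampered", pinned))   # -> False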
I understand that Google does this to remove certain stability issues (and I imagine they carry their own patches!), but I don't think vendoring is the fundamental problem here; the practical integration issues are solvable, just tedious.
EDIT: People do have reasons for vendoring, of course; I just don't think it should be the default behavior unless you have a good reason.
https://github.com/bazelbuild/bazel-buildfarm
https://cloud.google.com/build
https://aws.amazon.com/codebuild/
https://azure.microsoft.com/en-us/products/devops/pipelines/