The open source models are pretty good now too. They're a few months behind the frontier, but not more than that. Sure, you still have to host them in the cloud to get enough VRAM to run them - but looking 10+ years into the future, the end game here is probably a local LLM that runs on your own computer and is more than capable enough to do coding for you.
I've been asking myself the same question. Realistically, I think it depends a lot on how many providers are available in the future. If you lose access to one you can move to another, so it's not a single point of failure, per se. This question gets a lot more relevant if the providers consolidate into a monopoly instead of multiplying - but so far we've only seen more providers appear.