You could design an OS with abstractions built around latency instead of physical machines. It would still let you find and use resources according to their constraints of use, but wouldn't force you to keep track of which exact machine they live on.
I am not sure what you need a new OS for, or what an OS could give you that you cannot get from today's computing.
If you want the user to know they are talking to a remote machine, but don't want them to care which exact machine it is, we already have a ton of great solutions: load balancers, connection pools, service meshes, anycast, dynamic DNS, etc...
If you want remote calls to be indistinguishable from local calls at the source-code level, that's also solved! Many RPC frameworks and remote SDKs provide class-based interfaces that behave the same as a local class.
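For a concrete illustration of that point, here's a minimal sketch using Python's stdlib `xmlrpc` (the `Calculator` class and port choice are just assumptions for the example): the client-side `ServerProxy` object exposes remote methods as ordinary attributes, so the call site reads exactly like a local method call.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

class Calculator:
    def add(self, a, b):
        return a + b

# Spin up a toy RPC server on an OS-assigned port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_instance(Calculator())
threading.Thread(target=server.serve_forever, daemon=True).start()

# The proxy looks and acts like a local object...
calc = ServerProxy(f"http://127.0.0.1:{port}")
# ...but this "method call" is actually an HTTP round-trip.
print(calc.add(2, 3))
```

The caller never sees sockets or serialization; the framework hides the network entirely, which is exactly the transparency being asked for.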
The only place where an OS can help is if any OS function can be magically located on another machine. But even then... we have remote filesystems (NFS), remote terminals and execution (SSH), remote graphics (X11), remote audio (ALSA/PulseAudio)... What is left for the new OS? Process management? Is it worth it?
> If you want the user to know they are talking to a remote machine, but don't want them to care which exact machine it is, we already have a ton of great solutions: load balancers, connection pools, service meshes, anycast, dynamic DNS, etc...
Those resources are complex to program against. An OS should offer a simplified abstraction layer that makes them as transparent as possible. And yes, process management is worth it: a unified programming model that doesn't force you to track where each process instance is located is essential for massively parallel computing.
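The shape of that unified model already exists in miniature in Python's `concurrent.futures`: code submits work against the generic `Executor` interface and never learns (or cares) which worker ran it. This is just an analogy sketch, not the proposed OS API:

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

# The call site depends only on the Executor interface. Swapping in a
# ProcessPoolExecutor -- or, with third-party libraries, a cluster-backed
# executor -- requires no changes here: the caller never tracks which
# worker (or which machine) executed each task.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(5)))

print(results)
```

An OS-level version of this would make the "executor" span machines while keeping the programming model exactly this simple.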
Of course this could be done with platforms for massively parallel computing. The point of building an OS would be to push those platforms as close to the metal as possible to improve their efficiency.