> unless there are entirely different design patterns we have failed
It’s not that we’ve failed to find different design patterns, it’s that we found these patterns in the 70s and haven’t done much with them since. Since C there has been a pretty constant march toward more imperative programming, but I feel imperative programming has reached its peak for the reasons you describe.
We’re only just starting to explore the functional programming space and incorporate those learnings into our work. But what about logic programming, dataflow programming, reactive programming, and other paradigms that have been discovered but not really fully explored to the extent imperative programming has been? I think there’s a lot of room for improvement just by revisiting what we’ve already known for 50 years.
The imperative design matches the hardware too well to just dispense with. Rust's abstractions are probably the closest we've gotten to a composable design that fits closely to the hardware while mechanically preventing writing the most common bugs at that level.
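To make the "mechanically preventing" point concrete, here is a minimal sketch (my illustration, not from the parent comment) of how Rust's move semantics rule out a classic low-level bug, use-after-free, at compile time:

```rust
// `consume` takes ownership of the buffer; the buffer is freed
// when the function returns.
fn consume(buf: Vec<u8>) -> usize {
    buf.len()
}

fn main() {
    let buf = vec![1u8, 2, 3];
    let n = consume(buf);
    // println!("{:?}", buf); // compile error: `buf` was moved above,
    // so a use-after-free simply cannot be written in safe Rust.
    println!("{}", n);
}
```

The same imperative, hardware-friendly style of code is still there; the ownership rules just reject the buggy variants before they run.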
That said I agree that we've barely scratched the surface of the space of good paradigms; I'm partial to logic programming but most are underexplored. Perhaps other approaches can use Rust or its IR (MIR?) as a compilation target. As an example, this strategy is being used by DDlog ( https://github.com/vmware/differential-datalog ).
> The imperative design matches the hardware too well to just dispense with.
I don't think we should dispense with it for that reason, but we also have to admit that imperative programming won't match the design of promised future hardware as well as it has matched past hardware. The future will be focused on manycore, distributed, heterogeneous compute resources like GPGPU, neural cores, computer vision accelerators, cloud compute resources, etc.
Yeah, future computing hardware will be more diverse, but most of the things you mentioned will ultimately be programmed imperatively. GPGPUs are converging to look like many tiny general-purpose CPUs, neural cores and accelerators are implemented as specialized coprocessors that accept commands from CPUs, and distributed cloud compute is just farms of CPUs. All of these things have an imperative kernel, and importantly, every one of them cares very much about path dependencies, memory hierarchy, and ownership semantics, the very things that Rust brings to the table.
Even languages with 'exotic' execution semantics, like Prolog with its unification, are in practice implemented by an imperative interpreter such as the Warren Abstract Machine (WAM), and there's no obvious path to implementing such unorthodox semantics directly in hardware. Low-level imperative programs aren't going away; they're just being isolated into small kernels, and we're building higher-level abstractions on top of them.
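To illustrate the point, here is a toy sketch (my own, far simpler than the WAM) of how Prolog-style unification over atoms and variables is ordinarily realized as a plain imperative loop over a mutable binding store:

```rust
use std::collections::HashMap;

// A term is either a logic variable (numbered) or a constant atom.
#[derive(Clone, Debug, PartialEq)]
enum Term {
    Var(usize),
    Atom(&'static str),
}

// Imperatively follow variable bindings until we reach an unbound
// variable or an atom.
fn walk(t: &Term, bindings: &HashMap<usize, Term>) -> Term {
    let mut cur = t.clone();
    while let Term::Var(v) = cur {
        match bindings.get(&v) {
            Some(next) => cur = next.clone(),
            None => return Term::Var(v),
        }
    }
    cur
}

// Unify two terms by mutating the binding store in place.
fn unify(a: &Term, b: &Term, bindings: &mut HashMap<usize, Term>) -> bool {
    match (walk(a, bindings), walk(b, bindings)) {
        (Term::Var(x), t) | (t, Term::Var(x)) => {
            if t != Term::Var(x) {
                bindings.insert(x, t); // destructive update, like the WAM's trail-less core
            }
            true
        }
        (Term::Atom(x), Term::Atom(y)) => x == y,
    }
}

fn main() {
    let mut bindings = HashMap::new();
    // Unify X with foo, then Y with X: both end up bound to foo.
    assert!(unify(&Term::Var(0), &Term::Atom("foo"), &mut bindings));
    assert!(unify(&Term::Var(1), &Term::Var(0), &mut bindings));
    assert_eq!(walk(&Term::Var(1), &bindings), Term::Atom("foo"));
    println!("ok");
}
```

The declarative surface (X = foo, Y = X) bottoms out in loops and hash-map mutation, which is exactly why the imperative kernel never disappears.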
Sure, but that's not really imperative programming. That's imperative programs finding their correct niche. It's a shift in perspective. Today we do imperative programming with a little distributed/async/parallel/concurrent programming sprinkled in. In the future distributed/async etc. will be the default with a little imperative programming sprinkled in.