One thing that might help is that state space explosion is mainly caused by mutability: each mutation potentially creates branching that can be difficult or impossible to analyze statically.
In other words, imperative programming with const is directly transpilable to/from functional programming.
Using modern techniques like higher-order methods, scatter-gather arrays (similar to map-reduce), passing by value via copy-on-write, etc., code can be written that works like piping data between Unix executables. Everything basically becomes a spreadsheet.
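To make that concrete, here is a minimal TypeScript sketch of that pipeline style; the order data and threshold are invented purely for illustration. Each stage takes immutable input and produces a new value, much like piping text through grep/sed/awk.

```typescript
// Hypothetical order records, invented for illustration.
type Order = { region: string; amount: number };

const orders: readonly Order[] = [
  { region: "east", amount: 120 },
  { region: "west", amount: 80 },
  { region: "east", amount: 40 },
];

// Each stage takes immutable input and produces a new value,
// like piping data between Unix executables.
const totalsByRegion = orders
  .filter(o => o.amount > 50)                     // "grep"-style selection
  .map(o => ({ ...o, amount: o.amount + 5 }))     // "sed"-style rewrite -- copy, never mutate
  .reduce<Record<string, number>>((acc, o) => ({  // "awk"-style fold into totals
    ...acc,
    [o.region]: (acc[o.region] ?? 0) + o.amount,
  }), {});

console.log(totalsByRegion); // { east: 125, west: 85 }
```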
This also has implications for multiprocessing. When all data is const, many of the async edge cases that we're normally forced to deal with go away. Loosely, that means we can use techniques similar to double buffering to create new mutated copies where only the part that changed is actually copied; the rest of the data can reference one source of truth. In practice, this turns out to approximate optimal algorithms most closely, whereas best-effort imperative code may paint itself into a corner because design choices get dictated by the realities of the runtime rather than by the problem domain itself, so the optimizer loses opportunities to simplify it.
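A rough sketch of what I mean by the double-buffering / structural-sharing idea, with the state shape invented for illustration: the next snapshot copies only the path that changed, and everything else keeps referencing the original objects.

```typescript
// Hypothetical app state, invented for illustration.
type State = {
  config: { theme: string; locale: string };
  items: readonly number[];
};

const current: State = {
  config: { theme: "dark", locale: "en" },
  items: [1, 2, 3],
};

// "Double buffering" via structural sharing: build the next snapshot
// by copying only the part that changed; everything else keeps
// referencing the same underlying objects as the current snapshot.
const next: State = {
  ...current,
  items: [...current.items, 4], // only this part is new memory
};

console.log(next.config === current.config); // true  -- shared, not copied
console.log(next.items === current.items);   // false -- the changed part
```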
I find that in my daily work, nearly all of my time is spent untangling highly imperative mutable code, because I must hold the entire state in my mind when mutable references (mostly to objects) are passed between functions. My higher-order solutions often end up shorter, more readable, faster, and even more efficient, but harder to explain to junior developers. So I would really like a compiler that implicitly converts imperative code like for() loops and references into const/functional code whose intermediate code (i-code) can be reduced to its simplest form.
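As a toy example of the kind of conversion I'd like a compiler to do for me (hand-written here), the same sum of squares as a mutating for() loop and as a const/functional expression:

```typescript
// Imperative version: a mutable accumulator plus an index to track in your head.
function sumOfSquaresImperative(xs: number[]): number {
  let total = 0;
  for (let i = 0; i < xs.length; i++) {
    total += xs[i] * xs[i];
  }
  return total;
}

// The const/functional form a compiler could derive automatically:
// no mutable state survives past each expression.
const sumOfSquares = (xs: readonly number[]): number =>
  xs.map(x => x * x).reduce((a, b) => a + b, 0);

console.log(sumOfSquaresImperative([1, 2, 3])); // 14
console.log(sumOfSquares([1, 2, 3]));           // 14
```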
This is also my main concern about Rust and other mainstream imperative languages like C# and even JavaScript: they encourage bare-hands methods that go away with const/functional code if the user is willing to accept a doubling or more of memory usage, which becomes less of an issue over time as memory costs decrease.
Edit: I maybe should have said fork-join instead of scatter-gather, but they are related concepts.
> Using modern techniques like higher-order methods, scatter-gather arrays (similar to map-reduce), passing by value via copy-on-write, etc., code can be written that works like piping data between Unix executables. Everything basically becomes a spreadsheet.
I have built a decent number of multithreaded and distributed systems, and I agree in principle with everything you wrote in that paragraph. In practice, I find such techniques add a lot of unstable overhead in the processing phase as memory is copied, and again while the results are merged and conflicts are resolved. They also lock you into a certain kind of architecture where global state is periodically synchronized. So IMO the performance of these things is highly workload-dependent; for some workloads it is barely an improvement over serialized, imperative code and adds a lot of cognitive overhead, while for others it is the obvious and correct choice and pays huge dividends.
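For anyone unfamiliar with the pattern, here is a deliberately single-threaded scatter-gather sketch (word counting over invented input). The slice and merge steps are exactly where the copying and conflict-resolution overhead I'm describing shows up once real workers are involved.

```typescript
// Minimal scatter-gather sketch (single-threaded for clarity):
// split the data into chunks, process each chunk on its own copy,
// then merge the partial results at the end.
function scatterGatherWordCount(lines: readonly string[], chunks: number) {
  const size = Math.ceil(lines.length / chunks);

  // Scatter: each "worker" gets its own slice of the input.
  const partials = Array.from({ length: chunks }, (_, i) =>
    lines.slice(i * size, (i + 1) * size)
  ).map(chunk => {
    // Process: build an independent partial result per chunk.
    const counts: Record<string, number> = {};
    for (const line of chunk) {
      for (const word of line.split(/\s+/).filter(Boolean)) {
        counts[word] = (counts[word] ?? 0) + 1;
      }
    }
    return counts;
  });

  // Gather: merge partial results; with real workers this is the
  // synchronization point where conflicts get resolved.
  return partials.reduce<Record<string, number>>((merged, part) => {
    for (const [word, n] of Object.entries(part)) {
      merged[word] = (merged[word] ?? 0) + n;
    }
    return merged;
  }, {});
}

console.log(scatterGatherWordCount(["a b a", "b c"], 2)); // { a: 2, b: 2, c: 1 }
```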
Mostly I find the benefit to be organizational, by providing clear interfaces between system components, which is handy for things developed by teams. But as you said, it requires developers to understand the theory of operation, and it is not junior stuff.
Completely agree that we could use better language & compiler support for such things.