Of course the problem he has is that it doesn't follow the hot new immutability fad, and of course he doesn't explain what he needs it for; he just links to a Stack Exchange question that essentially says it might be useful in some cases.
It seems a lot of these articles, and programming language theory in general, are just complaints about features without any thought as to why they matter. What is this "problem" actually stopping him from doing? Why does it matter? And why should I abandon a language that has served me very well just because some edge-case feature is not supported?
Spot on. As a card-carrying dinosaur I've found myself from time to time needing to read up on some "new" (usually turns out to have been invented in the 1960s) coding thing.
Once I figure out what it is, I ask myself what problem it solves (the literature typically doesn't say). The answer tends to be one of:
1. Saves some typing.
2. Saves some work when refactoring.
3. Avoids some class of bug.
4. Highly useful in a kind of programming I don't do (e.g. compilers).
In my experience, #1-2 are much more common than #3-4, and often #1-2 can be dealt with through tooling (IDEs, for example).
1. What the heck is this thing everybody keeps talking about, with an impressive fancy name? It must be a major step forward; after all this time, I should finally have a look at it.
and then
2.a. Oh, it's just a big shiny name for something I accidentally kinda do sometimes; I didn't know it had a name. Why is it suddenly a fad to base everything on this?
or
2.b. Hmmm... right... so in C it would mean doing this and this... hmmm... okay... and why would I want to do that? I could do it in C; if I haven't done it yet, that's because I didn't see the benefit of it, and I still don't.
Not being able to see it happen in the debugger is my top peeve in this space. Yes, there's a magic shortcut for something, but no, you can't see what's actually happening at runtime, so when something inevitably doesn't accord with the expected outcome, you are none the wiser as to why.
Possibly. But I use other languages, and I sometimes use a few of those other features in those languages, but I don't miss them at all when I come back to, let's say, C. They don't fit, or don't bring benefit, etc.
Absolutely. With respect to this article and immutability, the strongest arguments I've seen for it are for concurrent programming, like the very successful Erlang solutions used at Ericsson, which I generally don't have an interest in. So, it is reason #4 on your list. For the types of program and the types of problem I need to solve, it's a moot point - people have been getting along just fine without immutable state and continue to do so, and to learn how to do it differently would just be more trouble than it's worth, if not an outright regression.
This is the major problem I have with this sort of article; people never talk about languages in the context of actual working programs, just engage in dry theorizing or armchair programming. If the author put the complaint from this article into context, I'd not have a problem with it. But because it's a toy example with no relevance to actual programs people might want to write and use, it's unhelpful and misleading.
Limiting mutability is not new and it's not a fad. It is at the very core of every single principle of modularity and dependency management ever invented by mankind, even beyond programming.
Almost 30 years ago, when I first learned C, one of the first design principles I was taught was to avoid mutable global variables. And ever since that time I haven't found anything as key to avoiding bugs as knowing and controlling exactly what code can change what data.
I find it totally baffling that a practicing programmer could possibly think of mutability as a purely theoretical concern.
Don't move the goalposts. Global state is not the same issue as immutable data. And even if it were, Go has the `const` keyword anyway, making it entirely irrelevant to the OP article.
This fad of using immutability everywhere and treating it like a silver bullet absolutely is new and is a fad.
The article isn't advocating "immutability everywhere". It complains about Go making it difficult to create immutable (or otherwise customized) data structures at all. So it is you who is moving the goalposts.
I mentioned global mutable state as an anecdote and also as an example on one extreme end of the spectrum that should make it clear what the problem is in principle: Not knowing what code changes what state.
But I see that you have decided to avoid debating the core issue entirely.
Your mention of Go's `const` keyword in this context leaves me scratching my head, as you probably know that it doesn't allow you to define immutable maps, global or otherwise.
One thing you're correct about: I have avoided debating it, because debating concepts like this is pointless and has nothing to do with real-world code. Which brings me back to the point I made on the article: why does he care so much that he can't have an immutable map? What problem can he not solve as a result of it? The example he's given is a toy problem, and the constraints that make him want such a thing are left unspecified.
>I have avoided debating it, because debating concepts like this is pointless and has nothing to do with real-world code.
It does have everything to do with my real world code. There is not a single piece of code I have ever written that let me get away with not keeping track of all the mutations that could possibly occur.
Let's leave aside complete immutability for a moment and consider a closely related issue that you can see in his code. He needs a map with defined iteration order, something I have needed occasionally in real world code.
He uses Go's builtin map type and a slice to define such a map. Now, how do you keep these two data structures in sync without restricting mutation? I think it's obvious that controlling mutation is a requirement here.
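A minimal sketch of that pattern (type and method names are my own, not taken from the article): the map and the key slice live behind a struct, and every mutation goes through a method, so the two can't drift apart:

```go
package main

import "fmt"

// OrderedMap keeps a map and a slice of keys in sync by funneling all
// mutation through methods. Callers can't touch either field directly.
type OrderedMap struct {
	keys   []string
	values map[string]int
}

func NewOrderedMap() *OrderedMap {
	return &OrderedMap{values: make(map[string]int)}
}

// Set inserts or updates a key, recording insertion order exactly once.
func (m *OrderedMap) Set(key string, value int) {
	if _, exists := m.values[key]; !exists {
		m.keys = append(m.keys, key)
	}
	m.values[key] = value
}

func (m *OrderedMap) Get(key string) (int, bool) {
	v, ok := m.values[key]
	return v, ok
}

// Keys returns the keys in insertion order as a copy, so callers
// can't break the internal invariant by modifying the slice.
func (m *OrderedMap) Keys() []string {
	out := make([]string, len(m.keys))
	copy(out, m.keys)
	return out
}

func main() {
	m := NewOrderedMap()
	m.Set("b", 2)
	m.Set("a", 1)
	m.Set("b", 20) // update: value changes, order doesn't
	for _, k := range m.Keys() {
		v, _ := m.Get(k)
		fmt.Println(k, v)
	}
}
```

Note what the encapsulation costs you in Go: you iterate via `m.Keys()` and look up via `m.Get(k)` rather than ranging over the type or writing `m[k]` directly, which is exactly the "can't define fully featured data structures" complaint.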
Go even admits that restricting access to data (i.e., encapsulation) is necessary, and obviously preventing inconsistent mutation is a key reason why that is needed.
But then Go turns around and says: oh, but you can't define fully featured data structures (i.e., supporting range loops, type safety, and indexing expressions) of your own that use encapsulation.
I have defended Go many times before, but the only way I can defend it is to say, yes it's true, this is a major weakness. We just don't know how to fix it without creating a lot of complexity elsewhere.
Also, sometimes the people making the argument for language feature X don't fully understand the existing solutions to the problem, or how much of a practical problem it really is. Take the "less typing" concern: when you write code day in, day out, a bit less typing really isn't in your top 10 list of concerns. That list would more likely include things like "easy to understand what the code does" and "easy to debug".