I'm not here to pick a side. I'm sure all of us can be more friendly to each other.
However, I'm a bit curious about what you mean when you say
> Suggesting that someone pours over documentation (and laughably source code) to explain patterns (or a lack thereof) in a highly abstracted language is counter-productive.
Isn't that what documentation is for? To learn about whatever's being documented. What would you suggest as a better way to convey this information?
For source code, I can see your point. Although in my experience, source code has the benefit that you can be certain it doesn't lie: it does exactly what it says. Sometimes that's a really nice benefit when trying to figure out what's going on, regardless of the language or environment you're in.
I would agree, if everything were documented. The issue at hand is what is not explicitly documented. What constitutes a guard-worthy function? I.e. a bare javadoc with no notation about what the API actually does is not sufficient; see the sketch below.
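To make that concrete, here's a hypothetical Java sketch (the interfaces and names are invented for illustration, not taken from any real API). Both versions have "documentation", but only the second answers the guard-worthy question:

```java
// Hypothetical API, invented purely for illustration.
interface OrderService {
    record Order(long id) {}

    // A "pure javadoc": it restates the method name and answers nothing.
    // Does process() guard against null? Re-validate the order? No way to tell.
    /** Processes the order. */
    void process(Order order);
}

// The same operation, but the javadoc now states the contract the caller needs.
interface GuardedOrderService {
    /**
     * Processes the order.
     *
     * @param order must be non-null and already validated; this method does
     *     NOT guard against null, so the caller has to check first
     * @throws IllegalStateException if the order was already processed
     */
    void process(OrderService.Order order);
}
```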
One thing not mentioned by other replies is that the discussion around the story of Last of Us got very polarized when the second game dropped. There was very loud opposition from part of the fanbase to certain plot points, themes and other decisions made in the story. This made the people who still liked it get very defensive about it.
I see that same controversy continuing in the discussion around the show, which, as is typical on reddit, manifests as downvotes, reports and name-calling ("woke" or "bigots", depending on the side).
Having followed the language for quite a while, including some of the controversies, I think there are two things you need to be mindful of:
1. The language version starts with 0, i.e. it is clearly communicated that there is no guarantee of stability
2. The language development is run by an opinionated BDFL who likes to take his time to think through new features. There are more novel, well-thought-out ways of doing things, but less support for "I need this for my production use case now"
Neither of those points is inherently bad, but if you plan to adopt Elm in your product or codebase, be prepared to rely only on the feature set you get from the current version. Expect breaking changes when a new version drops.
There's a lot to like about the language, and it can be a good introduction to functional programming if you are not yet familiar with it. IMO it is definitely worth spending time on to improve your skillset and gain new perspectives on programming.
I don't want to claim I'm an expert on different programming languages. I really wanted to replace JS with Elm for my use cases, but the ecosystem is too closed for proper collaboration.
The biggest restriction on adoption is that the language is the gatekept brainchild of a small number of devs. They are inclined to build the language around the use cases they want, instead of what the people who use their language want.
They have effectively closed the gate on other talented FOSS devs supercharging this language with features that would let it run circles around other languages currently in production systems (mainly JS).
It's their right to do so if they wish, but it means that I can't consider this language in any serious capacity. The language lacks flexibility and responsiveness to changing circumstances, and while they describe having a default library without an extensible ecosystem as a "feature, not a bug", in doing so they've essentially put a ceiling on what is practical with their language.
I really wanted to like this language, but the more I learned about it, the more I felt disappointed over what could have been.
> if you plan to adopt Elm in your product or codebase, be prepared to rely only on the feature set you get from the current version. Expect breaking changes when a new version drops.
Conversely, though, the language owners also don't get to have it both ways: if they aren't interested in committing to stability and (some measure of) implementing these production use cases, then they should not act as if it were a production-ready project, or promote it as one.
I've always been bothered by the (sometimes massive amount of) hate that I see on the internet towards agile. The experiences from other developers always seem a bit off, like something is not right.
I think your point might be the missing piece. Agile might not mix too well with fixed timeframes or fixed budgets, but rather needs an environment of continuous development where the requirements have room to drive the project within wide enough budgetary boundaries.
This feels to me like a natural way to build things. If we need something, we build it; otherwise we don't. And those "needs" might pop up at any time, through external events (customer requests etc.) or internal ones (new technical requirements becoming apparent as the project is being developed).
My experiences with agile development have been at companies without deadlines, and they are generally positive.
The reason is that enterprise implementations of "agile" are often the opposite of agile: waterfall with neither frequent releases nor customer feedback, but with lip service to agile practices and middle-management ceremonies.
If you go through the agile manifesto and compare, these implementations violate every point listed.
> The experiences from other developers always seem a bit off, like something is not right.
That's the point.
Agile is difficult to do exactly right. When done wrong, my perception is that it produces far more dysfunction and stress at an individual level than alternative project management methodologies.
If you've heard the fitness saying, "the best workout plan is the one you stick to," my feelings about project management are pretty similar. Agile may be great, but if your organization isn't capable of sticking to its central ideas then it's just not going to work. In that situation you're better off picking a methodology that's less efficient but easier to implement (like waterfall).
There are also some fundamental challenges presented by the methodology that are genuinely difficult to deal with. These tend to be ignored and then they manifest as dysfunction elsewhere in the system.
"agile" (the manifesto) was stating the obvious for people working on a product that had a lifetime longer than a "project". It works for products that have an ongoing life.
What it doesn't work in is environments with projects and budgets that are assigned quarterly or annually. That's where abominations like "scaled agile" have arisen.
Scaled Agile (TM) and Scrum and all the rest of the ceremony are to agile what ORMs are to SQL databases: an attempt to correct the impedance mismatch between the way software teams work and the way companies work.
It's probably because Agile devolved from a system teams use to keep track of developer progress on bugs and features into some weird cult led by managers who have 20 hours of meetings a week and can't begin to describe half of the projects they're meeting about.
How would your team justify keeping its headcount during a budget cut? Without deadlines, they could just say, "Since nothing you do is urgent, we will cut your headcount in half; other teams need it more."
Urgency doesn't imply value. Time horizons vary, and while cost tends to correlate with them (higher urgency, higher costs; lower urgency, lower costs), that doesn't mean one is more or less valuable; it just costs more.
ER doctors may provide urgent, life-saving surgery or medical intervention, and usually that costs a lot more than, say, long-term chemotherapy or HIV management with a specialist. Both are life-saving; it just turns out that one conveniently has a longer time horizon, which makes it easier to schedule around to reduce costs, while the other requires full attention and makes it difficult to juggle patients.
You would hope that management doing budget cuts would adjust them based on the value provided. That is admittedly difficult to quantify in many cases, but it shouldn't be quantified by how many, likely artificial, deadlines a group has and how busy they look. That's just silly.
> under Bayesian reasoning a lot of “fallacies” like sunk cost, slippery slope, etc. are actually powerful heuristics for EV optimization.
Can you elaborate on that?
This really piqued my interest. I feel like logic is easy to apply retrospectively (especially for spotting fallacies), but trying to catch myself in a fallacy in the moment feels like excessive second-guessing and overanalyzing; the sort that prevents forward momentum and learning.
Would you by any chance have any recommendations for reading on the topic?
Sure. Fallacies, as usually stated, tell you when something that feels like a logical entailment isn’t actually a logical entailment.
Intuitively, people find "Bob is an idiot so he's wrong" a reasonable statement.
Technically, the implication does not hold (stupid people can be correct) and this is an ad hominem fallacy.
However, if we analyze this statement from a Bayesian standpoint (which we should), the rules of entailment are different, and Bob being stupid actually is evidence that he's wrong. So maybe this is a pretty reasonable thing to say! Certainly reasonable people should use a speaker's intelligence when deciding how much to trust the speaker's claims, even though this is narrowly "fallacious" in an Aristotelian sense.
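To put toy numbers on that (the figures below are invented purely for illustration): let W be "Bob is wrong about this claim" and I be "Bob is an idiot", and suppose idiocy is observed more often among people who are wrong:

```latex
% Assumed toy numbers, for illustration only.
P(W) = 0.5, \qquad P(I \mid W) = 0.6, \qquad P(I \mid \neg W) = 0.3

P(W \mid I)
  = \frac{P(I \mid W)\,P(W)}{P(I \mid W)\,P(W) + P(I \mid \neg W)\,P(\neg W)}
  = \frac{0.6 \times 0.5}{0.6 \times 0.5 + 0.3 \times 0.5}
  = \frac{0.30}{0.45} \approx 0.67
```

Learning that Bob is an idiot moves P(W) from 0.5 to about 0.67: genuine Bayesian evidence, even though there is no logical entailment.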
I’m not aware of any reading on this topic. It seems under-explored in my circles. However I know some other people have been having similar thoughts recently.
No disagreement with the main thrust of your comment, it's a very good one and imo goes to the heart of the seemingly intractable divide between the logician's approach to truth and that of damn near everyone else - which tends to leave the logician reasoning into the void, doing not a bit of good for anyone.
However I myself would probably label the statement "Bob is an idiot" (or perhaps less abrasively, "Bob has often been wrong in the past in easily verifiable ways") not as evidence that he's wrong per se, but as a signal, possibly a rather strong signal, that he is likely also incorrect in the current matter.
A minor semantic quibble, but in my own experience I've found that conceiving of it as such helps frame the situation as a "sensor fusion of individually unreliable data sources" type of problem, as opposed to one of "collecting experimental results in a logbook and deriving conclusions from them."
The latter can lead pretty seamlessly to a towering edifice of belief built on some ultimately pretty shaky foundations. Ask me how I know ;)
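For what it's worth, that sensor-fusion framing has a tidy Bayesian form. Assuming the signals are conditionally independent given the hypothesis, each unreliable source contributes one likelihood ratio, and a ratio near 1 (a weak sensor) barely moves the posterior:

```latex
% Posterior odds for hypothesis H given signals s_1, ..., s_n,
% assuming the signals are conditionally independent given H:
\frac{P(H \mid s_1, \dots, s_n)}{P(\neg H \mid s_1, \dots, s_n)}
  = \frac{P(H)}{P(\neg H)}
    \prod_{i=1}^{n} \frac{P(s_i \mid H)}{P(s_i \mid \neg H)}
```

The towering-edifice failure mode is treating one of those ratios as if it were infinite.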
Yes, 100% agreed! Your post reflects my feelings on this.
It's important to understand that something being a "logical fallacy" just means that you can't conclusively justify conclusion X using reasoning Y alone.
But that does not mean that reasoning Y is not valid or helpful in understanding conclusion X.
Ultimately it's important to justify your views with sound reasoning, but life is full of heuristics, and using a heuristic to reach a conclusion can often be reasonable. It just means the conclusion is not definitive from a logical point of view.
Ideally you use a combination of logically sound and heuristic based statements to support an argument.
Following your Bob example: it's important that the person making the argument uses stronger reasoning than just calling Bob an idiot. But agreed that it's a totally valid piece of supporting evidence... assuming that "Bob is an idiot" is a fairly agreed-upon statement.