Back in the day, one always had to have two terminals open to work with ROOT: one to work in and the other to 'kill -9 root.exe', thanks to CINT happily destroying your TTY.
IMHO, ROOT[3-5] is too many things at once, with a lot of poorly designed APIs and, most importantly, a lack of separation between ROOT-the-library and ROOT-the-program (lots of globals and assumptions that ROOT-the-program is how people should use it).
ROOT 6 started to correct some of these things, but it takes time (and IMHO, they are buying too much into LLVM and Clang, increasing the build times even more and worsening the hackability of ROOT as a project).
Also, for the longest time, the I/O format wasn't very well documented, with only one implementation.
Now, thanks to groot [1], uproot (which was developed building on the work from groot) and others (freehep, openscientist, ...), it's possible to read/write ROOT data w/o bringing in the whole TWorld.
Interoperability. For data, I'd say it's very much paramount in my book to have some hope to be able to read back that unique data in 20, 30, ... years down the line.
TL;DR: Go is great b/c it brings great s/w engineering practices and a s/w engineering-friendly environment to scientists.
Admittedly, generics will change how packages are written.
So some code churn will take place when/if they land, but the Go community learned the lessons from Python2/3 and Perl5/6. Expect a better migration path.
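For a sense of what that churn looks like, here is a minimal sketch of package code under the type-parameters design (the constraint interface `Number` and the function `Sum` are my own illustrative names, not from any particular package):

```go
package main

import "fmt"

// Number is a hand-rolled constraint listing the element types we accept;
// the ~ means "any type whose underlying type is this".
type Number interface {
	~int | ~int64 | ~float64
}

// Sum works for any Number element type. Pre-generics, a package would
// ship one copy per type (SumInts, SumFloat64s, ...), and callers of
// those old names are the code that churns during migration.
func Sum[T Number](xs []T) T {
	var s T
	for _, x := range xs {
		s += x
	}
	return s
}

func main() {
	fmt.Println(Sum([]int{1, 2, 3}))      // one generic function...
	fmt.Println(Sum([]float64{0.5, 1.5})) // ...covers both call sites
}
```

The migration path is gentler than Python2/3 precisely because the old monomorphic functions keep compiling next to the new generic ones.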
Lastly, I guess the 2 remaining weak points of Go are:
- runtime performance sub-par w.r.t. C++ or Rust
- GUIs (which may or may not fall into "interactive visualization")
That said, the Go community worked on a Go kernel for Jupyter.
Gonum is neat, but to the previously-made point about Go's type system making stuff more painful than it needs to be in this application: gonum's linear algebra is defined over float64 and int, which is problematic if you need arbitrary precision.
These are things that are great for product development and devops and not in fact all that valuable in scientific computing, which is a reason why so much of it gets done in Python.
> These are things that are great for product development and devops and not in fact all that valuable in scientific computing
I disagree. Again, this may very well be science-domain dependent, but in High Energy Physics (where, finally, Python is recognized as a mainstream language, BTW) many -- if not all -- of the pain points that slow down undergrads, PhDs, post-docs and researchers at large are addressed by exactly these Go features.
yes, the time from "idea" to "little script that shows a bunch of plots" on a subset of the overall data is definitely shorter in Python (exploratory analysis is really really great in Python).
but, at least for LHC analyses, Python doesn't cut it when it comes to TB of data to sift through, distribution over the grid/cloud, ... you die by a thousand cuts.
and that's when you are alone on one little script.
LHC analyses can see well over 10-20 people assembling a bunch of modules and scripts.
You really long, pretty quickly, for a language more robust to (automated) refactoring than Python.
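On the "TB of data to sift through" side, a minimal sketch of why Go scales up from the one-person script: fanning work out over goroutines and channels is a few lines of stdlib code, and the compiler/race detector keep 10-20 people from stepping on each other (`process` here is a stand-in for real per-event analysis work, not anyone's actual API):

```go
package main

import (
	"fmt"
	"sync"
)

// process stands in for per-event analysis work; here it just squares.
func process(x int) int { return x * x }

func main() {
	events := make(chan int)
	results := make(chan int)

	// Fan out: a small pool of workers drains the events channel.
	var wg sync.WaitGroup
	const workers = 4
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for e := range events {
				results <- process(e)
			}
		}()
	}

	// Feed 100 "events" and close the channel when done.
	go func() {
		for i := 1; i <= 100; i++ {
			events <- i
		}
		close(events)
	}()

	// Close results once all workers have finished.
	go func() { wg.Wait(); close(results) }()

	// Fan in: reduce the results as they arrive.
	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println(sum) // sum of squares of 1..100
}
```

The same shape (channels in, worker pool, reduce) carries over whether the "events" are integers or chunks of an LHC dataset.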
I'm using exactly this. It's not the nicest interface, but it supports almost everything that LLVM does.
https://github.com/llir/llvm looks nice but I don't think you can interface with LLVM from there, which is needed if you want to do more advanced stuff.
I poked around with writing a Go interpreter a while back. There are a number of issues that make it practically infeasible. You can get some hacked-up stuff off of GitHub, but those hacked up things are pretty much the best you can do right now.
But as per my other comment in this thread, if the scientific community becomes big enough I wouldn't be surprised if they fork Go entirely, at which point that opens up a lot more options.