This was actually a direct question and the entire motivation for Andrew's talk, which was to introduce the answer. The skew favors not just Googlers but the Go team at Google, and the answer is that, despite asking people to discuss changes on the list, we never actually explained what goes into a proposal and into a design doc. The new process is meant, first and foremost, to help teach that, which we hope will make proposals more complete and therefore more likely to be accepted.
Russ, if this is what the talk was answering, is that really the question? I think the simpler answer to the grandparent post is: yes.
The closing keynote to the second GoLang conference, where all the keynotes were Google employees?
Asking about Go's reliance on Google deserves a better answer than your Groups thread, or this HN post. I love your writing in the post you linked, but a technical response on process still doesn't say anything about Google The Corporation's influence. Would you try again?
The Go team had nothing to do with the organization of the conference. The gophercon organizers decided who got the keynotes.
From an external viewpoint, Google the corporation has two influences. First, it employs the language leaders (e.g., Russ). Second, Google frequently hires experts in some particular field, or assigns them to the Go team, in order to improve the implementation: for example, the rewritten GC for the 1.5 release, or the move to SSA for 1.6. As a result, Google probably has a big influence on which tasks are considered important. That said, it is still open source, and so outside contributors can still work on what they consider important. For example, the shared library development was all external.
OK, let me try again. I read that post as asking about whether Rob's talk helped make proposals from outside Google more successful, not about Google's influence vs others.
I directly addressed Google's influence in my opening keynote; the text is at blog.golang.org/open-source. But I'll try to answer that interpretation of the question below too:
> Why is there a massive skew favoring Googlers having their proposals implemented compared to plebes?
> Did the stats change after rob's talk in 2012?
One reason is that Google pays for the bulk of the Go development, especially development that needs design (for example, the new garbage collector, or core language changes). This is changing slowly: we are seeing other companies begin to pay for Go development that makes Go better in some way for them. Canonical is paying people to add support for shared libraries, which matters for Linux distributions like Ubuntu. Intel is paying people to make Go take more advantage of Intel features; Oracle is paying someone to port Go to SPARC 64. I mentioned this in my keynote.
Another reason is that we weren't clear about the vision for Go. Rob's talk in 2012 was the first crisp statement of that vision. I hope to have elaborated a bit in my keynote. We're still trying to articulate that clearly. I do believe that non-Google proposals have been more in keeping with Go since then, but I don't have data. That talk was an important step by itself, but certainly not a sufficient one.
We believe a significant reason is that we have never properly explained what makes a complete proposal. If you look at golang.org/doc/contribute.html, it is very clear about how to send code but contains almost no text about how to send a proposal or design. We believe that being clearer about what makes a complete proposal and what makes a complete design doc will help non-Googler contributors be more successful at those. Of course, we will have to wait and see what the actual effect is.
This was the point of Andrew's talk: to recap the history of how changes to Go have been proposed and made, and to admit that we realize we haven't been doing a good job at supporting non-Google proposals and are trying to address that, both by being clearer about what a successful proposal looks like and by establishing a timely process for answering them and a historical record.
The focus here is the _success_ rate of non-Google proposals. The balance of Google vs non-Google proposals, which you seem to be asking about, depends mainly on how much development is being funded by non-Google companies or individuals. Having a clear process may help increase that, but it would be a secondary effect.
The skeptic in me points at Android and deduces that the whole focus of Google open source is skewing ecosystems toward Google revenue and/or hiring models.
The focus of Go's open source project is making a language that works well for the kind of networked software Google and many other companies write, roughly "cloud software". I am not sure what ecosystem that would skew toward Google revenue: maybe cloud providers, but Go works just as well on Amazon and Azure and Heroku as it does on Google's products. I am not sure what "hiring models" means, but I've literally never thought about anything related to the hiring process while working on Go.
For more about Go and open source, see my opening gophercon keynote, at blog.golang.org/open-source.
Sorry, I see why that was unclear. Android is a different example of the behavior OP was observing:
Why is there a massive skew favoring Googlers having their proposals implemented compared to plebes?
The AOSP acceptance decisions are made by Google employees. You are free to fork Android, but Google favors features that are aligned with their objectives for the platform. See this thread for a tangible example: https://news.ycombinator.com/item?id=8803118
Just that groups of different people with different thinking styles are able to come up with solutions to problems faster than a team of people who think alike or are beaten into group think by a strong central authority.
It really follows the pattern of biological diversity in ecosystems: a diverse ecosystem is able to thrive and overcome troubles more than a non-diverse one, which is actually quite fragile.
But a 'diverse' team is code for a team with both black and white people, men and women, straight and gay, etc.
So when you say 'groups of different people with different thinking styles', you are implying that, for example, black people think differently from white people for the sole reason that they are black and other people are white. Or in other words, that if you get a random black person off the street, you can assume his views will be different from a white person's?
Isn't that kind of grouping and assumption exactly what we don't want to promote? I can't reconcile this in my mind.
Anonymous because I don't want to be shamed as a racist when I'm not, but a genuine question.
>> you are implying that, for example, black people think differently from white people for the sole reason that they are black and other people are white.
Not at all. Instead, different people coming from different backgrounds tend to have different life experiences and perspectives, which influence how they approach the world and how they solve problems. It's not inherent to their race or gender, but merely a fact of having lived different lives.
If you get a bunch of people from different backgrounds and cultures then you'll have a better pool to draw new ideas and creative thinking from. Much better than a room full of ~22yo middle-class white guys.
The idea isn't that you make a Pokemon team of people of all skin colors, sexualities and genders to have all damage types covered. It's more that cultural background and experiences may shape thinking, problem-solving approaches, etc., and if your team is 100% straight white affluent men, chances are pretty good they will all have roughly overlapping backgrounds and leave the vast majority of the human experience uncovered.
I don't get it - either it's right to look at the color of someone's skin and make an assumption about their background, or it isn't.
You could hire both a rich white guy and a poor black guy, in order to get diversity, and then find out they both love classical music, basketball, knitting, went to the same college and vote for the same political party.
If you're telling me you can look at the color of someone's skin and tell me what their cultural beliefs and background is then you're the crazy racist judgemental one!
I'd say you'd have better luck predicting the weather with a dartboard. But knowing a person's skin color or cultural heritage or gender etc. would let you make better predictions; that's a fact.
For building diverse teams, I think selecting people based on their skin color would be pretty dumb. Rather, pay attention to their background. But you should be putting their skills first in either case.
I said nothing about gender and ethnicity. Diversity is a concept that goes beyond that. You can be a white straight male and still be a weirdo, or a black Hispanic lesbian and still be a conformist.
Right, but people say things like 'all the speakers at this conference are white men - that's not diverse enough'.
If you are determining diversity by sex and skin color, I'm saying that you are making judgements about people's background, tastes, culture, etc that are racist and sexist.
You could also say all the speakers at this conference are too young, too old, too corporate, too hipster, too southern, too Yankee, ....
You can't really create diversity directly (people feel comfortable around those like themselves), instead it happens organically because people turn out to be naturally different, BUT you can definitely act against it through bias based on gender, ethnicity, age, looks...
The goal is to let diversity thrive through environments and hiring practices that are free of unnecessary bias, but old habits take a long time to break.
But natural ecosystems have a very stringent natural selection operating, so you always end up with diversity that represents an objective improvement, something that isn't guaranteed when diversity is driven by a company's philosophical notion of justice.
Go was made in China over 2500 years ago at which point it was called yì (弈). Then some guy had to name his programming language Go and make "Go programming" hard to search for.
I have a question about the design of the garbage collector of Go. Does it also clean up goroutines that are waiting on a "forgotten" channel? And does this apply for processes waiting for each other in a cycle?
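For concreteness, here's a minimal sketch of the kind of situation I mean (a made-up example, not from any real program):

    package main

    import "time"

    func main() {
        ch := make(chan int) // unbuffered; nothing will ever be sent
        go func() {
            <-ch // blocks forever once ch is otherwise forgotten
        }()
        // No other reference to ch remains in the program.
        // Is the blocked goroutine (and the channel) ever collected?
        time.Sleep(time.Second)
    }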
Start with an existing language (Brand X). Reimplement it almost identically with modern-looking syntax. Improve it in some ways and worsen it in others. Give it a new name. Apparently a method that works.
In all seriousness, though, they could've just independently rediscovered a lot of wise design decisions. Strange if they'd never seen this one studying programming language history, though. I'm with the author wishing they applied the innovations that have happened in recent decades or even new ones. Julia is an example of an imperative one that tries to do that.
Thanks for the talk as it was interesting! A friend who pushes Go on other forums talked about how it reminded him of programming in his favorite, Wirth-style languages. Interesting that re-creating that experience was an explicit design goal with them drawing on good features of Wirth languages, ALGOL60 (not ALGOL68), and others. The ALGOL60 quote was similarly impressive:
"Hoare said of it, 'Here is a language so far ahead of its time, that it was not only an improvement on its predecessors, but also nearly all its successors.'"
Quite an achievement. It came from having only experienced language designers on the committee. Modern committee-driven efforts could learn from this technique of using people with brains and experience.
One of their smartest moves was how they marketed it. Many nice language designs happen but don't get any traction. Their design choices, tooling efforts, and big-company support have been much better than the Java rollout and many minor attempts. My hope for the language is that it replaces the use of C# and Java in as many places as possible so people don't have to learn all that mess in the maintenance phase. The simpler language should make for nice legacy-code work, I think.
Not very accurate. Rob Pike pursued a unique line of PL design research from a systems perspective that was never integrated into mainstream PL design research. There is nothing particularly wrong with this; it is what happens when communities are isolated (they develop technologies independently, see the Americas vs. Europe/Asia before Columbus). The fact that the world is much smaller now (via globalization and the internet) means that diversity necessarily decreases to converge on an identified local optimum. Go is interesting since it rejects this pressure and explores an alternative path where PL technology could go, which seems to have paid off (it has use cases and has attracted users).
if you want to invoke the America vs Europe rift you'll probably be interested in an earlier talk from the same conference given by Robert Griesemer on the evolution of Go:
in short, Go's design is not from a unique line of programming languages, it just goes back further than you think. the argument made in the talk is that Go combines the European heritage of Algol (Pascal/Oberon) with the American one (C).
Sourcegraph's live blog is fantastic, but they do miss some nuance. Robert prefaced this by saying twice that you really really should not read too much into this observation about America vs Europe. He just found it interesting that the two lines branching off of Algol-60 do seem to have come back together in an interesting way in Go.
Well, natives of America actually crossed over from Asia in multiple waves, bringing over the mainland technology available at the time while there was a land bridge! So in a sense, the analogy is still quite apt.
I still can't figure out what Go is supposed to bring to the table that Erlang doesn't. From [0], CSP mostly differs from Actors in that multiple actors can listen on the same channel? Is that a unique line of PL research? Or was it just a success to replace Erlang's funny syntax and minimal type system with a C-inspired syntax and an embarrassingly bad type system? I guess my real question is: what does Go do better than Erlang?
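To make the comparison concrete, this is the sort of thing I mean by multiple listeners on one channel (a rough Go sketch; an actor mailbox has exactly one owner, so this fan-out needs an explicit dispatcher there):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        jobs := make(chan int)
        var wg sync.WaitGroup
        // Three goroutines receive from the same channel.
        for w := 0; w < 3; w++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                for j := range jobs {
                    fmt.Printf("worker %d got job %d\n", id, j)
                }
            }(w)
        }
        for j := 1; j <= 5; j++ {
            jobs <- j
        }
        close(jobs)
        wg.Wait()
    }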
As someone who has shipped both Erlang and Go in production, I feel I can somewhat answer this.
Teams. Go does "teams" better. Shipping an Erlang app to production was simultaneously one of the happiest and most depressing points in my career.
Happiest because it did exactly what we intended with minimal code and it was done early and under budget (yey!). The initial group liked Erlang, already knew Erlang and simply were able to bang it out -- delightful!
Depressing because of everything that happened after... trying to teach DevOps crew was a nightmare, they HATED us -- all that introspection you can get on an Erlang system only makes sense after lots of experience. Trying to hire people who already knew Erlang was borderline impossible. Worse still, hiring people who didn't know Erlang and trying to train them up was swimming upstream in multiple ways -- it just didn't work. Erlang is really a bit of its own thing -- and if you are absolutely new to it, very hard to get over the hump... it is a weird pragmatic FP actor system and lacks any sense of purity or type niftyness.
... For initial development, Go was less elegant than Erlang, but due to simplicity and really solid tooling probably slightly more productive, though not so much that I would favor it over Erlang. But the magic happened when we involved other people. Handing off to DevOps was a DREAM... it was just a binary that logged stuff. They didn't need to know anything about what was in the black box. Hiring people is already relatively easy for Go, and training up developers in it is AMAZING. You can hire basically any competent developer and a week later you have a competent Go developer... a 50-page spec and a dead-simple language work wonders. No debates about code formatting, builds static binaries in seconds, super easy to experiment with from minute one (versus getting your head around OTP).
I consider myself a programming language geek, I program in dozens and dozens of languages, and love building my own little DSLs. Go is -- breathtakingly boring as a language, "exceptionally unexceptional". Honestly, it isn't much fun at all to program in, it is a bit of a grind. However, Go as a tool (when the language isn't the point, the product is, your customers are) is amazing... it is viciously focused on getting things done on teams, and so far, it has been an incredible asset to our team.
I still follow and toy with Elixir in hopes that it will hit critical mass, but wasn't willing to bet my livelihood on it as a founder.
The differences aren't that huge; maybe the biggest ones are FP orientation and static typing. That, and Go is relatively simple. The tooling for Go is excellent, and the network effect of that is huge.
People like to complain about the type system or generics, but it doesn't seem to be a big problem in practice.
Design by consensus generally produces inferior results. The first version of a noteworthy innovation is often the result of a single person's effort. But it's also evident that big projects are the result of teams. You can't produce something big without a team.
So what makes me uncomfortable is that I don't see one of the inventors of Go having produced a rough version one before pulling other people in (but to be fair, I simply don't know).
I'm more suspicious about the first Go program. Was computing a prime sieve the number one problem the Go creators had? Doubtful. When C was being written, the authors were also writing an application they deeply needed: UNIX.
Finally, the thought of making something "serious" spoils the general hack flavor of innovative solutions. It's like a beacon that ideas are about to start being policed, which sounds like the opposite effect consensus intends to produce. It may be a good idea to keep a project non-serious as long as possible instead of optimizing it prematurely.
Thanks for elaborating. The interpretations you're using are not what came across in the talk. The notes are just notes, so some of the nuance is lost. I'm told there will be videos in two weeks.
Design by consensus is different than design by committee. In the latter, there is horse-trading and the like, so that basically everyone's ideas go in. In contrast, design by consensus means that nothing went into Go until all three - Rob, Robert, and Ken - agreed it was right. That's actually a higher bar than design by just one person.
The Go spec came before the Go implementation. That's actually important too - it's not an implementation-defined language - and the prime sieve was written as a complete example program for the spec. It's still a decent example of what you can do with just plain Go, without any libraries. Go was meant for networked servers, but a complete HTTP server would have been way too large (remember, there were no libraries yet!).
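For reference, the sieve in the spec was essentially this (quoting from memory, so details may differ slightly from the published version):

    package main

    import "fmt"

    // generate sends the sequence 2, 3, 4, ... to channel ch.
    func generate(ch chan<- int) {
        for i := 2; ; i++ {
            ch <- i
        }
    }

    // filter copies values from in to out, removing multiples of prime.
    func filter(in <-chan int, out chan<- int, prime int) {
        for {
            if i := <-in; i%prime != 0 {
                out <- i
            }
        }
    }

    func main() {
        ch := make(chan int)
        go generate(ch)
        for i := 0; i < 10; i++ {
            prime := <-ch
            fmt.Println(prime)
            ch1 := make(chan int)
            go filter(ch, ch1, prime)
            ch = ch1
        }
    }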
Go certainly had a target: the kinds of networked servers that Google builds. The first real Go programs were exactly those servers. But it takes a lot of code to build up to that. For much the same reason, you can bet that UNIX was not the very first C program; something like hello world (or maybe just 'exit 0') probably was. Judging Go by the prime sieve is like judging C by "hello world". Obviously greeting the world was not the number one problem the C creators had.
In the context of the talk, "serious" does not mean what you are saying. Around July 2008 the first draft of the current compiler and runtime started working on both Linux and OS X, but nearly all of the standard library, many key language features, and much reworking of the existing things were yet to come. Serious does not mean frozen here. It means that Go started being a focus of active development for more than just the original three authors. (In particular, Ian Taylor and I joined the team, and development accelerated quite a bit.)
Stepping back, the focus of Andrew's talk was how changes to Go were proposed and made, and how that process has changed over time. For more about the actual design, see Robert's talk, the Evolution of Go (https://sourcegraph.com/blog/live/gophercon2015/123645585015). And again the video will contain more nuance than the notes.
Actually, he adapted BCPL, changing the syntax a bit and taking a few things out to fit it onto the PDP, but even the manuals were almost a copy of the BCPL ones.
> Go began with Robert Griesemer, Rob Pike, and Ken Thompson. “What should a modern programming language look like” (the story goes as they were waiting for some C++ to compile).
Cute. Nice to keep a dig[1] going. Since that horse has been kicked quite a few times over the years, how about those in the Go community address these language criticisms:
For specific examples from the aforementioned link:
The "correct" way to build generic data structures
in Go is to cast things to the top type and then
put them in the data structure. This is how Java
used to work, circa 2004.
And:
Go has the null pointer (nil). I consider it a
shame whenever a new language, tabula rasa,
chooses to re-implement this unnecessary
bug-inducing feature.
Of which the latter has been regretted by its inventor[2] for years.
"It is an extremely common occurrence for a programmer to accidentally forget to account for the possibility that a pointer may be null, potentially leading to (at best) crashes and (at worst) exploitable vulnerabilities."
Null pointers won't result in exploitable vulnerabilities in Go (not counting straightforward DoS-by-causing-the-program-to-panic). They might, in very rare cases, result in something exploitable in C or C++, because dereference of null is undefined behavior and the compiler is free to optimize accordingly. But in Golang the language semantics require the runtime to panic on nil dereference, so that won't result in something like RCE as it technically could in C or C++ (but isn't likely to).
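To illustrate the guaranteed panic (a minimal sketch):

    package main

    import "fmt"

    func main() {
        defer func() {
            // The runtime reliably panics on nil dereference; recover
            // turns the panic into an ordinary value instead of a crash.
            if r := recover(); r != nil {
                fmt.Println("recovered:", r)
            }
        }()
        var p *int
        fmt.Println(*p) // panics here; never prints
    }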
Attempting to dereference a null pointer may not violate memory safety, but it can do much worse things, like crash programs in the middle of unfinished transactions. If the atomicity guarantees are provided by someone else, then it's not much of a problem; it's their responsibility to clean up. However, if you're the one implementing the atomicity guarantees, you're pretty much screwed.
> But in Golang the language semantics require the runtime to panic on nil dereference, so that won't result in something like RCE as it technically could in C or C++ (but isn't likely to).
While you thoughtfully address the concerns Mr. Yager presents regarding the potential result of using a null pointer in Go, IMHO the more relevant fact is that Mr. Hoare identified in his own words a mistake he made _fifty years ago_ (1965) by introducing "null."
The fact that Mr. Gerrand stated in the submitted article:
“What should a modern programming language look like”
begs the questions:
Why is this mistake oft addressed in _other_ languages repeated in Go?
What are the benefits of _intentionally_ introducing this boundary condition into the language?
Put bluntly, there is no benefit to dragging 0x0 forward in "a modern programming language." The lack of an answer to a function/method call is trivially encoded with some form of an option[1] type.
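Even within Go's own idiom, the comma-ok shape already encodes "maybe no answer" without a nilable pointer; a hypothetical sketch (names are made up):

    package main

    import "fmt"

    // lookup returns the value plus a presence flag rather than a
    // nilable pointer, so absence is visible in the signature.
    func lookup(m map[string]int, key string) (int, bool) {
        v, ok := m[key]
        return v, ok
    }

    func main() {
        ages := map[string]int{"ken": 72}
        if age, ok := lookup(ages, "rob"); ok {
            fmt.Println(age)
        } else {
            fmt.Println("no answer")
        }
    }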
nil dereference is only really critical if it can result in security issues, as in C and C++. Beyond that, there are many classes of errors that are possible even in stronger type systems like Haskell's; nil dereference is only one of them.
The question ultimately is: what is the shortest path to developing a functioning and correct program? Compilers can be very smart and help you catch a lot of errors, but they can also introduce friction if they become too slow. Everyone has different opinions about the "best" because we have different ways of getting to the end result. Programming is a discovery process: some people prefer to think a lot upfront, some like to experiment, and probably every one of us uses a different mix of both.
All this to say that nil in Go is not an absolute mistake. It's just a trade-off like everything else. Since I took Haskell as an example, it's also much much slower to compile.
NULL's ability to create security errors in C/C++ is a relatively recent practical concern. (The theory that permits them goes back a long ways, but only recently have the optimizing compilers gone so crazy with unspecified behaviors.) Our concerns with NULL being a bad idea go back farther than that.
In my experience, unless you enable all the crazy type system extensions (which isn't a terribly good idea IMO), GHC doesn't really spend that much time typechecking programs. Most of the time is spent on optimizations, which are (0) possible thanks to purity [hooray!], (1) necessary to overcome the limitations of laziness [dangit!].
In any case, Haskell supports exploratory programming just fine. It's true that Haskell's definition of "exploring" (refine your design until you can convince GHCi and QuickCheck that it makes sense) is somewhat idiosyncratic from a Lisper's or Smalltalker's point of view (refine your design until you can convince yourself that it makes sense), but that doesn't make it any less legitimate. It's still experimenting, except now you have a tool to validate the degree to which the experiment has been a success. It's experimental science, not experimental art.
---
On the specific topic of nils, I think it's fair to say they're an absolute mistake in a statically typed language first released as late as 2007. Standard ML and OCaml, both much older languages, have parametric polymorphism, algebraic data types, freedom from nils, and fast compilers: Poly/ML and OCaml's bytecode compiler, respectively. In addition, both Standard ML and OCaml have unrestricted imperative facilities, so freedom from nils is perfectly possible in an imperative language.
---
Regarding the classes of errors that are still possible in type systems like Haskell's, I have found only one that really matters, and where the usual workarounds (e.g., phantom types) don't help. Haskell, being a mathematically-inspired language, favors programming in terms of "timeless" entities whose existence is unaffected by the number of times you use them. While appropriate for manipulating pure data (numbers, strings, lists, trees, etc.), it's awkward for manipulating ephemeral computational resources, such as file descriptors or open sessions.
A language that fares better in this regard is Rust. Rust has parametric polymorphism, algebraic data types, freedom from nils (are we seeing a common theme?) and type classes, but gives up on interactive programming in exchange for precise control over data structure representation and resource management. Rust's type system design understands very well the ephemeral nature of objects: When a non-`Copy` object is passed around elsewhere, it's truly gone from the current lexical context.
---
Now, I don't want to give the impression that I think verification is the be-all and end-all of computer programming. There's plenty of room for languages that let you develop "fast and loose" solutions, as opposed to following some principled philosophy. (Ruby comes to mind.) However, Go's very own marketing suggests that this isn't what they're trying to achieve. They do want a language that enforces a programming discipline. They just happen to have designed a language whose discipline doesn't provide enough benefits to justify the freedom restriction.
One thing I would like to observe is that all the languages you are citing that don't implement nil also implement pattern-matching. All have something equivalent to the Haskell Maybe type, which requires unwrapping the value upon use.
I would be open to admitting that nil is an absolute mistake if you can demonstrate that unwrapping is as convenient as just checking for nil and then using the value (or present another mechanism that is as convenient).
In regards to possible classes of errors in Haskell I can think of a few: any non-terminating program, any program that blows up the memory. It's possible to implement a Monad that doesn't obey the monadic laws. It's possible to inverse the order of arguments inadvertently if they have the same type. Any insufficiently specified type, like using Int when only a range of Int is acceptable or any String when only a URL is acceptable. I'm sure there are more ;)
> if you can demonstrate that unwrapping is as convenient as just checking for nil and then using the value.
If you want to build a large computation that may abort at any point in the middle, you can use the MaybeT monad transformer. Or EitherT, if you want to supply an error message when you fail. Inside a `do` block, it looks just the same as imperative code. So no syntactic convenience is lost.
But the real benefit, at least in my opinion, is the ability to design APIs that allow less room for error. In the vast majority of cases, pointers and references simply aren't intended to be nullable in the first place.
> It's possible to implement a Monad that doesn't obey the monadic laws.
And it's possible to overload operators in C++ in ways that make no sense whatsoever as well. (Using the same operator for bit shifting and stream I/O, seriously?) At least Haskell has a culture of associating laws to type classes.
> It's possible to inverse[sic] the order of arguments inadvertently if they have the same type.
No worse than any other language. If anything, Haskell's type system makes more distinctions, so this specific error is less likely.
> Any insufficiently specified type, like using Int when only a range of Int is acceptable or any String when only a URL is acceptable.
Idiomatic Haskell would make custom types, with smart constructors if necessary. Using String when a URL is expected is simply bad API design in Haskell.
---
But, anyway, I'd rather not continue this subthread, since the topic is Go.
He is a person who wrote a lucid description of what many, myself included, see as fundamental issues with Go. Yet he does so without anthropomorphizing it. Reading the first paragraph reveals:
"I like Go. I use it for a number of things (including this blog, at the time of writing). Go is useful. With that said, Go is not a good language. It's not bad; it's just not good."
Since Mr. Gerrand decided to take cheap shots at other languages, I felt it best to present the community with genuine criticisms of the Go language which were _not_ expressed in that style.
> Since Mr. Gerrand decided to take cheap shots at other languages
I don't think I was taking any kind of "cheap shot." It is literally the origin story of Go. Google has a few unreasonably large C++ projects and the incredible build infrastructure to make that possible. It was in part Rob's dissatisfaction with this state of affairs that triggered the first discussions of Go.
I believe Mr. Yager's point was not so much about Java generics as it was that determining the type of a container's contents is impossible without manual type checking. I am not he, so take that with a grain of salt.
What makes opaque containers a particular form of debug hell is working in a non-trivial code base. Lobotomizing the one assistant which never tires (the compiler) puts the onus on people to "just know" what is in there (for whatever definition of "there" applies at the moment).
Call me cynical, but people are horrible at keeping these kinds of concerns straight.
The built-in types slice and map are type-safe and cover 90% of container needs. If you want to use a non-builtin container such as container/heap and you find yourself casting too often, write one or two little wrapper functions for your use case, which encapsulate your casts. This is also an opportunity to make the wrapper represent the problem domain better.
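A hypothetical sketch of what I mean, using container/heap (the wrapper names are made up):

    package main

    import (
        "container/heap"
        "fmt"
    )

    // intHeap satisfies heap.Interface; the interface{} casts live
    // only inside this wrapper, never at the call sites.
    type intHeap []int

    func (h intHeap) Len() int            { return len(h) }
    func (h intHeap) Less(i, j int) bool  { return h[i] < h[j] }
    func (h intHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
    func (h *intHeap) Push(x interface{}) { *h = append(*h, x.(int)) }
    func (h *intHeap) Pop() interface{} {
        old := *h
        n := len(old)
        x := old[n-1]
        *h = old[:n-1]
        return x
    }

    // PushInt and PopInt are the typed wrappers callers actually use.
    func PushInt(h *intHeap, v int) { heap.Push(h, v) }
    func PopInt(h *intHeap) int     { return heap.Pop(h).(int) }

    func main() {
        h := &intHeap{5, 1, 3}
        heap.Init(h)
        PushInt(h, 2)
        for h.Len() > 0 {
            fmt.Println(PopInt(h)) // prints 1, 2, 3, 5 in order
        }
    }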
I don't believe anyone from the Go team has said that using interface{} in containers is "the "correct" way to build generic data structures". It is merely one of the ways available today. (The other is to define algorithms in terms of operations, like sort.Sort does with sort.Interface.) We're not saying either of those are "correct" in the sense of optimal, but they're what we have now.
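For the second way, a minimal sort.Interface sketch (example names are mine):

    package main

    import (
        "fmt"
        "sort"
    )

    // byLen implements sort.Interface: the algorithm (sort.Sort) is
    // written once in terms of Len/Less/Swap, with no type casts.
    type byLen []string

    func (s byLen) Len() int           { return len(s) }
    func (s byLen) Less(i, j int) bool { return len(s[i]) < len(s[j]) }
    func (s byLen) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }

    func main() {
        words := []string{"banana", "fig", "cherry"}
        sort.Sort(byLen(words))
        fmt.Println(words) // fig first; banana and cherry tie on length
    }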
> Generics is the single biggest language feature absent in Go. It’s often missed by newcomers to Go. But it’s more of a type-system mechanism. It’s unclear if it’s an essential language feature.
> Generics are incredibly complex in both semantics and implementation. There are considerable trade-offs to consider, such as do you want a larger binary vs. slower binary vs. larger source code.
> It’s also very non-orthogonal. It interacts with and complicates a lot of other language features.
> They have tried and implemented prototypes of generics in Go, but they haven’t found a good solution, so they’ve decided to hold off and keep thinking about it for the time being.
Well, there are some rare cases where you really need generic data structures, and it's not hugely annoying to use go generate and a template implementation. That being said, I'd still like to see generics in Go, provided the drawbacks are limited.
The other criticism, about null pointers, is just ridiculous. If null pointers make it into a list of your top ten pain points while developing software in a team, I'm going to call you out for not being honest with yourself. Null pointers have their benefits and drawbacks, but they're so far down the list of practical concerns that they shouldn't even feature in your decision on what language to use.
Does identifying fundamental deficiencies in a language now qualify as "bike shedding"?
> Well, there are some rare cases where you really need generic data structures, and it's not hugely annoying to use go generate and a template implementation.
I don't understand how you draw this conclusion, as homogeneous collections are one of the most used forms of collections I've seen for many years. For an example of the precise issue losing the type within a container causes, see this example[1] of where a Go developer was struggling with an issue that is non-existent when types are retained.
As for using an external tool to do "template generation", XDoclet[2] did that well over a decade ago. It was a PiTA then and now.
> If null pointers make it into a list of your top ten pain points while developing software in a team, I'm going to call you out for not being honest with yourself.
They certainly do. As the number of people working on a system increases, the assumptions regarding subsystem interaction vary. By having the ability to create a "software singularity" at any point by producing a null, everything has to be _manually_ checked at the invocation site. Just like what had to be done in C for decades.
Can you guarantee that everyone involved in the lifetime of a project will always use the same approach, have the same assumptions about how the behavioural contracts are fulfilled, and will _always_ check everything that needs to be ensured "non-null" at the call sites?
With option types[3], or even better with disjoint union types[4] where the type contains either an error _or_ the desired result, this simply isn't a concern. The developer _must_ deal with the possibility of maybe not getting something back from a call (or of getting an error _instead of_ the desired result).
Why is there a massive skew favoring Googlers having their proposals implemented compared to plebes? Did the stats change after rob's talk in 2012?