First, please don't compare Go and Rust. They are completely different languages with completely different target use cases. Go has very different design goals than Rust, so very little that Rust does would actually be possible in Go, and vice versa.
> Go code often passes references over channels, which results in shared memory between two threads with no locking
Having a complex type system would cut into compile times (an explicit primary design goal, to the point where the Go compiler must be written to read each file exactly once[0], no more).
As for the code you're referring to, Go is not designed to be a language which prevents you from shooting yourself in the foot with provable code. It's designed to be a language which makes it reasonably easy to be sure you haven't, as long as you follow the general idioms and best practices. What you're describing here is definitely not one - I can think of a few instances in which pointers might be reasonably passed over a channel, but they're few and far between.
Also, I would think it's pretty obvious that once you've sent something over a channel you shouldn't try and write to it anymore[1]. Do you have an example of the code you're referring to?
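For concreteness, here's a minimal Go sketch of the hazard being discussed (the `Msg` type and `demoSharedPointer` function are names invented for illustration). Sending a pointer over a channel copies only the pointer, so both goroutines can reach the same struct afterwards; the extra `mutated` channel merely sequences the demo so its output is deterministic. Remove that synchronization and this is exactly the kind of data race in question.

```go
package main

import "fmt"

// Msg is a hypothetical message type; sending *Msg transfers a pointer,
// not a copy, so sender and receiver share the underlying struct.
type Msg struct{ Text string }

// demoSharedPointer sends a *Msg over a channel and then writes through
// the pointer anyway. The `mutated` channel exists only to order the demo
// deterministically; without it, this is a classic data race.
func demoSharedPointer() string {
	ch := make(chan *Msg)
	mutated := make(chan struct{})
	out := make(chan string)

	go func() {
		m := <-ch     // receiver now holds the same struct as the sender
		<-mutated     // wait for the sender's second write (demo-only ordering)
		out <- m.Text // observes memory the sender wrote after "handing it off"
	}()

	m := &Msg{Text: "original"}
	ch <- m            // the message has been "sent"...
	m.Text = "mutated" // ...but the sender can still write to it
	close(mutated)
	return <-out
}

func main() {
	fmt.Println(demoSharedPointer()) // prints "mutated", not "original"
}
```

With the `mutated` channel deleted, `go run -race` flags the sender's second write.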
[0] well, technically "at most once", since some files can be skipped entirely.
[1] I'd need to think about this, but I don't think it'd be difficult to detect this statically[2] at compile-time and enforce that stack-allocated rvalues into channels are never used again in the same scope. It's definitely possible to extend `go vet` to handle this, and it may even be possible to write this in as a compiler error in a future version of Go.
[2] Incidentally, one of the reasons that it's so easy to do reliable static analysis on Go code (compared to other languages) is that the grammar is incredibly simple - it's almost entirely context-free, which is very rare among non-Lisps. Having a more complex type system usually requires at least some additional syntax to go along with this, which means you'd have to start sacrificing this design goal as well in order to create a more elaborate type system.
> (an explicit primary design goal, to the point where the Go compiler must be written to read each file exactly once[0], no more).
Same in Rust. Thanks to the module system, the Rust compiler never rereads a file more than once.
> [1] I'd need to think about this, but I don't think it'd be difficult to detect this statically[2] at compile-time and enforce that stack-allocated rvalues into channels are never used again in the same scope. It's definitely possible to extend `go vet` to handle this, and it may even be possible to write this in as a compiler error in a future version of Go.
That trivially fails to solve the problem, due to aliasing.
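To make the aliasing objection concrete, here's a minimal sketch (names invented for illustration) in which the proposed rule -- "a value sent over a channel is never used again in the same scope" -- is satisfied, yet the sent memory is mutated anyway:

```go
package main

import "fmt"

// Msg is a hypothetical message type.
type Msg struct{ N int }

// demoAlias satisfies the proposed syntactic rule (`m` is never mentioned
// again after the send), but still writes to the sent memory via an alias.
func demoAlias() int {
	ch := make(chan *Msg, 1) // buffered, so one goroutine suffices for the demo

	m := &Msg{N: 1}
	alias := m // a second name for the same object
	ch <- m    // `m` is never used again in this scope...

	alias.N = 2 // ...but the shared struct is mutated anyway

	return (<-ch).N // the receiver observes 2, not 1
}

func main() {
	fmt.Println(demoAlias()) // prints 2
}
```

Any sound version of the check would need alias analysis, not just a scan for reuse of the variable that appeared in the send statement.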
> [2] Incidentally, one of the reasons that it's so easy to do reliable static analysis on Go code (compared to other languages) is that the grammar is incredibly simple - it's almost entirely context-free, which is very rare among non-Lisps. Having a more complex type system usually requires at least some additional syntax to go along with this, which means you'd have to start sacrificing this design goal as well in order to create a more elaborate type system.
Rust's grammar is also context-free, as far as I know.
Your proposed static analysis is not reliable. The "more complex" type system in Rust exists precisely so that we can do more reliable static analysis.
Technically, byte-strings (and nested comments) mean the lexer is not regular, but the grammar is still context-free. But anyway, parsing is not typically the bottleneck these days.
> Thanks to the module system, the Rust compiler never rereads a file more than once.
Are you sure about this? When you use a generic function accepting and/or returning values of type T, I guess the compiler has to generate a version of the function for each instantiation of T. But the compiler cannot know all the possible uses of the generic function without having walked through the whole code first, which implies at least two passes. How does it work?
Ok, you're right, but we're playing with words here :) I understand the file containing the source code of the generic function is read only once, but I guess the AST of the generic function is processed many times, at least once for each instantiation of the function? And I'd venture a guess that the biggest cost is in processing the AST and compiling it, not in reading/parsing the source?
Generating specialized code from a generic AST is in no way analogous to the exponential-time explosion that is the C++ header system, which is what Go is referencing when it says that it has been designed to read each source file only once. All languages with proper module systems have this property (which is to say, basically all languages that aren't C or C++).
Yes, designing a proper module system and getting rid of the header system is the best way to improve compilation time. But it was not my point. There are other factors impacting compilation time. Look at Scala for example: it has a proper module system, but compilation is still relatively slow, even if better than C++. This is the reason why I'm interested in learning how Rust compiles generic functions, and what the impact is on compilation time (because the compiler has to generate a version of the function for each possible T), the binary size, and the pressure on the CPU cache (because having many versions of the function makes it harder to keep all of them in the cache at the same time).
The Rust compiler actually has built-in diagnostics to let you profile the time that each phase of compilation takes. I'm on mobile, but I believe it's `rustc -Z time-passes`, and whenever I use it on my own code 95% of the time consists of LLVM performing codegen. Nearly every other compiler pass takes approximately 0% of the compilation time (coherence checking being the notable exception, for reasons I have yet to discern).
It's not playing with words: the parser only runs once, and processing/reprocessing the AST is much faster than re-interpreting the original source multiple times (e.g. it can be indexed cheaply).
> > (an explicit primary design goal, to the point where the Go compiler must be written to read each file exactly once[0], no more).
> Same in Rust. Thanks to the module system, the Rust compiler never rereads a file more than once.
As I said right at the beginning of my post, I'm not trying to compare Rust and Go directly, because I don't think that's meaningful. I'm explaining why these particular features would be difficult to incorporate into Go. Note that I never said that Rust reads a file more than once, or that Rust's grammar is not context-free. In fact, the word "Rust" doesn't appear anywhere in my comment at all except in that very first paragraph.
I love talking about PLT and would otherwise be interested in having a discussion about static analysis and hearing why you think it would not solve the problem, but I have to say, it's both frustrating and discouraging to post an in-depth response and then get downvoted twice, with the only reply being one which very clearly ignores the very first line of my entire response.
Bystander here, neither upvoted nor downvoted you, but I think one reason for the skepticism is that people have heard the "It shouldn't be hard to implement X in language Y..." refrain many times before, and until somebody actually does implement X in Y, it means nothing. "It shouldn't be hard to make Python at least as fast as V8." "It shouldn't be hard to implement static typing on top of Python 3 function annotations." "It shouldn't be hard to add lambdas to Java [well, they finally did with Java 8...which we still can't use on Android]".
If it's actually not that hard, go implement a checker for Go that does check that stack-allocated rvalues sent over channels are never used again, and post it here. You'll probably shoot to the top of Hacker News, it'd be a nice open-source project for the resume, and it'd provide a very useful tool for the Go community.
> I would otherwise be interested in having a discussion about static analysis and hearing why you think it would not solve the problem
Stack allocation isn't in the semantics of Go, so that's a pretty weird thing to use as the basis of a static analysis. It's also not sound, because of interfaces and closures. You would want something more like "fully by-value data with no pointers in it, no interfaces, no closures".
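A short sketch of the interface hole (the `Event` type and `demoHiddenPointer` function are hypothetical): a channel of plain-looking by-value structs can still smuggle a pointer, so restricting the analysis to pointer-typed channel elements is not enough.

```go
package main

import "fmt"

// Event looks like plain by-value data, but its interface field can carry a
// pointer across the channel (closures can smuggle pointers the same way).
type Event struct {
	Payload interface{}
}

func demoHiddenPointer() int {
	ch := make(chan Event, 1)

	n := 1
	ch <- Event{Payload: &n} // the Event struct is copied; the pointer inside is not

	n = 2 // this write is visible to whoever receives the Event

	return *((<-ch).Payload.(*int)) // 2, not 1
}

func main() {
	fmt.Println(demoHiddenPointer()) // prints 2
}
```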
You outlined why they shouldn't be compared because they are different and outlined differences. The response (from one of the rust devs, if I'm correct) explained how some of your purported differences weren't actually different. The response wasn't ignoring your first line, it was explaining how portions of your evidence backing up that first line were factually incorrect.
> First, please don't compare Go and Rust. They are completely different languages with completely different target use cases. Go has very different design goals than Rust, so very little that Rust does would actually be possible in Go, and vice versa.
It's perfectly reasonable to compare any two programming languages, especially where their use cases overlap.
> As for the code you're referring to, Go is not designed to be a language which prevents you from shooting yourself in the foot with provable code. It's designed to be a language which makes it reasonably easy to be sure you haven't, as long as you follow the general idioms and best practices. What you're describing here is definitely not one - I can think of a few instances in which pointers might be reasonably passed over a channel, but they're few and far between.
> They are completely different languages with completely different target use cases. Go has very different design goals than Rust, so very little that Rust does would actually be possible in Go, and vice versa.
But everyone keeps trying to anyway, and that's kind of interesting.
The reason, I am guessing, is that Go billed itself as a "systems programming language". It did; go find the original announcement video (or was it a press release). That is what it said.
Later, after the replacing-C idea didn't quite materialize, they clarified that by "systems programming language" they actually meant something else. Now if I said that, or some other anonymous user did, ok, fine, people get confused, don't know enough, etc. But I don't believe Rob Pike doesn't know what a systems programming language is.
Anyway, I am not saying one way or another but just illustrating that everyone doing the comparison is not completely insane.
Now the way I see Go is more like a Python++ or Ruby++. All those nice concise languages, + concurrency, + speed, + some type safety, + easy static binary deployment. So I agree with you that Rust and Go should not be compared.
My perspective is that Go seems to be trying to aim at network aware services where Java might have been used before.
Internally what I've seen @ Google is it seems to be used here and there for services where Google teams have typically used C++ (but other companies might use Java.)
And yes, some places where Python has been used, Go makes a decent replacement.
Not my cup of tea, but I can see its niche. It's not the same niche as Rust. At least not right now.
> First, please don't compare Go and Rust. They are completely different languages with completely different target use cases. Go has very different design goals than Rust, so very little that Rust does would actually be possible in Go, and vice versa.
Almost all languages are different and thus have different design goals. That doesn't mean it should be off-limits to compare them.
> First, please don't compare Go and Rust. They are completely different languages with completely different target use cases.
Then why does everyone insist on comparing Rust and C++? Objectively Rust is much closer to Go than to C++.
That's just dumb. Rust and Go were, in fact, contemporarily designed to similar constraints and with similar goals. That they made significantly different design choices is an interesting point that should be discussed and not swept under the rug just to win internet points.
At the language level, Rust is substantially more similar to C++ than it is to Go. C++ and Rust both have many properties (lack of GC, pervasive stack allocation [even for closures], move semantics, overhead-free C FFI compatibility) that many other languages lack, and the Rust developers actively work to match C++ on features. None of this is true for Go; Rust's similarities with Go are shared by many other languages as well, many of which are much more widely used than either (hence would probably represent a more informative comparison; e.g. I think posts contrasting Rust and Java would be quite useful, but I have seen very few of them). As such, Go and Rust comparisons tend not to be very illuminating.
> At the language level, Rust is substantially more similar to C++ than it is to Go. C++ and Rust both have many properties (lack of GC, pervasive stack allocation [even for closures], move semantics, overhead-free C FFI compatibility) that many other languages lack, and the Rust developers actively work to match C++ on features.
I think this is absolutely true, for what it's worth.
The Rust on Rosetta Code is exceedingly old. There's a community project to update the examples, but they're waiting until 1.0 to move upstream, so as not to cause them too much churn.
Rust is very similar to modern C++, just without backwards compatibility considerations, and with the (fantastic, IMO) additions of compiler-enforced ownership rules and generics instead of templates, which makes certain type checking work better.
What objective criteria are you using by which Rust is much closer to Go? Go being garbage collected and Rust/C++ not being seems like a much larger gap than anything between Rust and C++.
A managed heap; emphasis on the use of functional constructs; rejection of lots of previously-thought-to-be-essential OOP features; built-in concurrency based on a rejection of the traditional primitives; general de-emphasis of metaprogramming constructs like templates and macros; broad design emphasis on preventing common programming mistakes.
You're saying, I think, that the Go implementation and runtime look more similar to Java's, and that C and Rust are clearly in the same family (in the sense of being only loosely coupled to a runtime environment). That's the way a language implementer might look at the question, I guess. It's certainly not the only one.
I'm not sure what you mean by this, but when I hear "managed heap" I think of heap memory managed by a garbage collector. This is not a feature of Rust.
> emphasis on the use of functional constructs
Rob Pike wrote (unavoidably, because of the lack of generics) crippled versions of map and reduce for Go and declared that the almighty for loop was superior. I don't think an emphasis on the use of functional constructs follows from this.
> rejection of lots of previously-thought-to-be-essential OOP features
Yep, this is an important similarity between Go and Rust. Rust brings traits, which are (extremely) similar to Haskell's typeclasses, into the mix as well.
> built-in concurrency based on a rejection of the traditional primitives
Sure, but the difference between the languages is crucial here: Go builds safe(ish) concurrency primitives into the language; Rust does not, but provides language features powerful enough to build memory-safe concurrency primitives in the standard library. And, perhaps surprisingly, mutexen and the like are actually encouraged precisely because Rust can make them safer.
> general de-emphasis of metaprogramming constructs like templates and macros
I don't agree that Rust de-emphasizes these. Generic programming is strongly encouraged, and indeed Rust's generics are implemented very similarly to C++ templates. Rust also strongly encourages the use of macros for cleaning up repetitive blocks of code, both inside of and outside of function bodies. And very powerful metaprogramming is on the way (already available in nightlies) in the form of compiler plugins, which allow arbitrary code execution at compile time.
> broad design emphasis on preventing common programming mistakes.
I think this is a feature, or at least an intended one, of every higher-level programming language :-)
On the rejection-of-OOP thing; I agree that seems like the biggest similarity between Go and Rust, but even then, the major replacements for those features are completely different – Rust's traits are more like typeclasses and C++ templates (to some extent) than they are to Go's interfaces (though "trait objects" are like interfaces, but used less often). It also seems like Rust will eventually add some form of more traditional OOP features like inheritance, because the servo project would like such things to implement the DOM (and I think others are also interested for different reasons), which I doubt Go will ever do.
> It also seems like Rust will eventually add some form of more traditional OOP features like inheritance, because the servo project would like such things to implement the DOM
There's been a great community and core team effort to design small, orthogonal language features (or extensions to existing features) that can be used to regain the performance + code reuse benefits one gets from using inheritance, without all the associated warts. The DOM/Servo problem is a tough one and it's going to be very interesting to see if Rust can solve it without resorting to the blunt instrument of inheritance.
Yeah I've followed the evolution of that discussion with interest (or I did until a few months ago, so I'm likely out of date), and came to the, perhaps incorrect, conclusions that it is likely there will be some solution added at some point, and it is likely to share some trade-offs with inheritance.
> Go has very different design goals than Rust, so very little that Rust does would actually be possible in Go, and vice versa.
Oh give this lame argument a rest already. They're both supposedly general-purpose programming languages. Rust shoots for a little lower level than Go, but they can certainly be compared, and the comparisons are valid.
> Having a complex type system would cut into compile times (an explicit primary design goal, to the point where the Go compiler must be written to read each file exactly once[0], no more).
I'm not sure how this is a defense of Go: yes, that's a primary design goal; it's also a terrible design goal. Trading an adequate type system for a short-term gain in compile time is an amateurish mistake.
Compilation time can be mitigated on even the largest projects by only rebuilding changed modules. If you're getting to the point that this isn't working, then maybe it's time to start working on reducing the size of your code.
I've worked on projects that were 500,000 lines of code and compile time was sometimes an issue, but not nearly as large an issue as bugs which would have been caught by an adequate type system (large C#/Java codebases pre-generics).
> As for the code you're referring to, Go is not designed to be a language which prevents you from shooting yourself in the foot with provable code. It's designed to be a language which makes it reasonably easy to be sure you haven't, as long as you follow the general idioms and best practices. What you're describing here is definitely not one - I can think of a few instances in which pointers might be reasonably passed over a channel, but they're few and far between.
We've tried this approach before, many, many times, and the result has been high-profile bugs.
The fact is, these "idioms and best practices" will not be followed perfectly on projects of any reasonable size if they are not enforced by code. Why would you not enforce them with code? And if you're designing a language, why not design the language in such a way as to allow code enforcing best practices? Like, a type system?
> Also, I would think it's pretty obvious that once you've sent something over a channel you shouldn't try and write to it anymore[1].
Yes, just like it's pretty obvious that when you free memory you shouldn't write to it any more. We've never had any issues with that, now have we?
> It's definitely possible to extend `go vet` to handle this, and it may even be possible to write this in as a compiler error in a future version of Go.
I didn't know about `go vet` and I wish I didn't.
So, to avoid a second pass over the code during compilation, you add a second pass over the code during `go vet` that has fewer capabilities. Good thinking. I'll add that onto the list of other stupid ideas that were tried decades ago, didn't work, and are finding new life in Go.
> Incidentally, one of the reasons that it's so easy to do reliable static analysis on Go code (compared to other languages) is that the grammar is incredibly simple - it's almost entirely context-free, which is very rare among non-Lisps. Having a more complex type system usually requires at least some additional syntax to go along with this, which means you'd have to start sacrificing this design goal as well in order to create a more elaborate type system.
Yes, adding syntax for static type checking will definitely make static analysis like type checking harder. Are you fucking kidding me?
> The fact is, these "idioms and best practices" will not be followed perfectly on projects of any reasonable size if they are not enforced by code. Why would you not enforce them with code?
The compiler shall not be an obstacle to the programmer. We tried enforcing things before, many times. Those things got abandoned.
Sigh. We build machines to automate the repetitive, to eliminate the daily drudgery and to repeat steps with perfection (that we would repeat with perfection ourselves if only we were as focused as a machine). So why do we keep finding ourselves arguing that a lazy compiler, which offloads the work of a machine onto a dev team, is an acceptable compromise?
Meta-comment: I believe the difference in opinion here (which seems to recur, over and over, and has for decades) is because the job title of "software engineer" actually encompasses many different job duties. For some engineers, their job is to "make it work"; they do not care about the thousand cases where their code is buggy, they care about the one case where it solves a customer's problem that couldn't previously be solved. For other engineers, their job is to "make it work right"; they do not care about getting the software to work in the first place (which, at their organization, was probably solved years ago by someone who's now cashed out and sitting on a beach), they care about fixing all the cases where it doesn't work right, where the accumulated complexity of customer demands has led to bugs. The first is in charge of taking the software from zero to one; the second is in charge of taking the software from one to infinity.
For the former group, error checking just gets in their way. Their job is not to make the software perfect, it's only to make it satisfy one person's need, to replace something that previously wasn't computerized with something that was. Oftentimes, it's not even clear what that "something" is - it's pointless to write something that perfectly conforms to the spec if the spec is wrong. So they like languages like Python, Lisp, Ruby, Smalltalk, things that are highly dynamic and let you explore a design space quickly without getting in your way. These languages give you tools to do things; they don't give you tools to prevent you from doing things.
The second group works in larger teams, with larger requirements and greater complexity, and a significant part of their job description is dealing with bugs. If a significant part of the job description is dealing with bugs, it makes sense to use machines to automate checking for them. And so they like languages like Rust, C++, Haskell, OCaml, occasionally Go or Java.
The two groups do very little work in common (indeed, most of the time they can't stand to work in the opposing culture), but they come together on programming message boards, which don't distinguish between the two roles, and hence we get endless debates.
My point was: tools that prevent you from doing things should not do that without explicit permission. Because thinking is hard and any interruption by a tool or a compiler will impose unnecessary cognitive load and will make it even harder, which may lead to a logical mistake. It is much better to deal with the compiler after all the thinking is done, not during.
I'm pretty sure you've never used a language with a good type system then.
You describe a system where you have to keep everything a program is doing that's relevant in your head at once, and when you're forced out of that state, it's catastrophic. You seem to be assuming that's the only way to get productive work done while programming. I happen to know it's not.
If a language has a sufficiently good type system, it's possible to use the compiler as a mental force multiplier. You no longer need to track everything in your head. You just keep track of minimal local concerns and write a first pass. The compiler tells you how that fails to work with surrounding subsystems, and you examine each point of interaction and make it work. There is no time when you need the entire system in your head. The compiler keeps track of the system as a whole, ensuring that each individual part fits together correctly. The end result is confidence in your changes without having to understand everything at once.
So why cram everything into your brain at once? Human brains are notoriously fallible. The more work you outsource to the compiler, the less work your brain has to do and the more effectively that work gets done.
Yes, but "tools that keep you from doing things you would prefer not to have done in the first place (while still granting you permission to override this when desired)" would be a fairer assessment of what a strict compiler is.
We all agree that a null dereference is a bad thing at runtime. I see no advantage for me as a programmer to be allowed to introduce null dereferences into my code as a side effect of "getting things to work" if then when the code runs it doesn't work right. This increases my cognitive load as a programmer, it does not decrease it.
I would argue that you don't think about the compiler any more when using a language like Haskell than you do when using Python. But you do get more assurances about your program after GHC produces a binary than after Python has finished creating a .pyc -- and that is a win for the programmer.
Agreed. But every production language that I'm aware of has an out that allows you to escape its type system, with the exception of languages whose type systems are intended to uphold strong security properties and verification languages that feature decidable logics. I can't remember for sure, but I think even Coq--which is eminently not a language designed for everyday programming--may diverge if you explicitly opt out (though I could be wrong about that).
The questions, to my mind, are
1. How easy it is to opt out?
2. How often do you have to opt out?
3. How easy it is to write well-typed expressions?
4. What guarantees does a program being well-typed provide?
For example, you almost never have to opt out of the type system in a dynamic language, but the static type system is very basic. In a language like Rust, you opt out semi-frequently (unsafe isn't common but it's certainly used more often than, say, JNI), and it can be hard to type some valid programs, but opting out is simple and the type system provides very strong guarantees. In a language like C, you never have to opt out of the type system, and the annotation burden for a well-typed program is minimal, but the type system is unsound--being well-typed guarantees essentially nothing in C.
All languages fall somewhere along this spectrum, including Go. It's just a question of what tradeoffs you're willing to make.
I think it's worth clarifying that Rust's `unsafe` doesn't opt-out of the core type system per se, it allows the use of a few additional features that the compiler doesn't/can't check. I think this distinction is important because, as you say, `unsafe` isn't very uncommon and so it is nice that one still benefits from the main guarantees of Rust by default inside `unsafe`. :)
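For comparison, Go's own out is the `unsafe` package. A minimal sketch: reinterpreting a float64's bits as a uint64, which steps around the type system entirely (the standard library exposes this same conversion as `math.Float64bits`):

```go
package main

import (
	"fmt"
	"unsafe"
)

// bits reinterprets a float64's memory as a uint64 via unsafe.Pointer,
// Go's escape hatch from the type system.
func bits(f float64) uint64 {
	return *(*uint64)(unsafe.Pointer(&f))
}

func main() {
	fmt.Printf("%#x\n", bits(1.0)) // IEEE 754: 1.0 is 0x3ff0000000000000
}
```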
Sophisticated type systems have not been abandoned by any stretch of the imagination.
We tried static code generators before, as well as linters and static code analysis on untyped code, and those have pretty well been proven to be ineffective. All of which are supposedly "new innovations" in Go. So if you want to defend Go that's not an approach you can really take.
My understanding is that most languages are context free in their syntax, but that type checking and namespace "stuff" are (almost?) always context-sensitive.