TL;DR: the author likes Rust and wanted to use it. The article reads like a dev rationalizing to management what they already wanted to do. These things are fine, but dev to dev, it's obvious they just wanted to use this cool tech. Good for them.
What's more, Go is excluded because (in the author's opinion) goleveldb is not as mature as RocksDB, so they would have had to use cgo, which is way suboptimal, slower, etc.
The title should have been: "C++11 or Rust? We chose Rust". In a greenfield project like this one seems to be, it's a choice I would approve of.
A personal note regarding future comments to this thread: I have had enough of negative advertising against Go in every language related thread. People who use Go are not stupid: they know the language limits and tradeoffs and are okay with them. Deal with it.
Re your last part - is that fair? It's very common that programmers have to end up maintaining code written in languages they don't like, due to the original authors moving on, lack of better jobs in their area, whatever.
People advocate against Go partly because they'd rather not have to work on Go codebases in future.
Well, I don't want to work on C++ or Rust in the future. Should I go into every thread about them and slam their shortcomings? Should I call every dev that uses them stupid? That is what he was talking about. You can dislike the language, but don't call the people who do like and use it idiots. There are better ways to get your points across.
People do! But not as much with C++ because it's very rare to find a C++ developer who is genuinely in love with the language and is unaware of its faults. Not many people starting new C++ codebases today unless they feel they have no credible alternative choice.
To a lesser extent the same is true of Rustaceans - they tend to be well aware of the constraints of their language, and won't try to defend it or pump it outside of some specific use cases. Nor do the Rust designers, who are fairly humble.
Nobody should be calling Go users stupid. But they typically don't. At least not here. The language itself, on the other hand ...
Welcome to HN lol. You’ve got it easy. Try being a front-end dev, because as HN will tell you, none of us know anything about computer science, which is why we are wasting our lives making over-engineered bloated shit. And we have an obsession with novelty, which is why all our technical decisions are based on what is currently fashionable.
The reason is that people have no clue about Go most of the time and advocate against it because it's popular. Pretty much like Node etc... I mean, look at Erlang derivatives / Haskell / Rust: you don't see that negativity in their threads.
Or perhaps they're language geeks. Go is truly a horror show when it comes to programming language concepts. Everything about Go is a complete special case. Select... has its own type system exceptions (and so many exceptions you might as well say Go simply has different type systems for all built-in types, for all built-in functions, and for several special forms). Make... ditto. New... ditto. "Go"... same. Channels? Effectively a separate type system. Dicts? Same. And so on.
Everything is a compiler exception. Nothing makes sense. Nothing.
The absolute worst in Go, in my opinion, is the "range" function/special case. It is return-type polymorphic. That's right, it does different things depending on what you assign its result to (not even C++ dared to go there). Not parameter polymorphic, return-type polymorphic. "Assign" a range to one variable and it does X; two variables, it does Y; nothing? it does something else yet again; a channel? again something else. And all of these are special cases in the type system (same, but not quite the same of course, with case, channels and dicts, by the way).
Needless to say, even though assigning a range's results is the only way to use it in Go, that assignment does something entirely different from any other assignment in Go's type system.
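To make that concrete, here is a minimal Go sketch of the forms being described (identifiers are mine, not from the thread): one variable over a map yields keys, two variables yield key/value pairs, and one variable over a channel yields received values.

```go
package main

import "fmt"

// rangeForms exercises three shapes of the `range` clause and counts
// the iterations each one performs.
func rangeForms() (keys, pairs, fromChan int) {
	m := map[string]int{"a": 1, "b": 2}

	// One variable over a map: you get only the keys.
	for k := range m {
		_ = k
		keys++
	}

	// Two variables over a map: key and value.
	for k, v := range m {
		_, _ = k, v
		pairs++
	}

	// One variable over a channel: each received value, until close.
	ch := make(chan int, 2)
	ch <- 10
	ch <- 20
	close(ch)
	for v := range ch {
		_ = v
		fromChan++
	}
	return
}

func main() {
	k, p, c := rangeForms()
	fmt.Println(k, p, c) // prints: 2 2 2
}
```

Whether this counts as "return-type polymorphism" or merely special-cased syntax is exactly what the replies below argue about.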
Go is how to make a language extremely dumb and yet give it a type system that can't be (fully) described more briefly than Haskell's.
I was never good at computer science-y "language theory" type stuff. I believe you when you say that Haskell or Forth are better languages - I just can't use them worth a damn.
Go is a practical language for me - it's not beautiful, or elegant, the type system is only 70% there, can be verbose, etc, etc, but it allows me to put my code in production and have a very high level of confidence that I won't get a phone call at 4 am. I don't have that confidence with Ruby, or Python, or Javascript.
I find that Go is very easy to pick up, and hard to really fuck up. Most people at my company can read my code and I can read theirs, which is mind-boggling to me. Go somehow achieved what I thought was impossible - to read and grok quickly other people's code.
Go was developed by "engineer" types - not "language nerds". And, yes, it shows.
FWIW, I also love Rust and C, but will choose Go over Rust any day when my productivity and deadlines are more important than close-to-C performance.
I don't code at work for programming language concepts; you should use the tools that make sense for the task you're working on. I'm not being paid to have fun using an academic language that no one uses, that has bad tooling and not enough libraries to work with. Ever wonder why pretty much no one uses Haskell / Erlang if they're that great? Having the best language concepts doesn't mean a thing in the real world. I try those things at home, not at work where $$$ is at play.
He wasn't saying use Haskell instead. It was a comparison of the complexity of the type system vs power provided.
Most Go programs could have been written in Java, Scala, C# or Kotlin and been shorter, more predictable and probably faster. Certainly more maintainable (ok maybe not for Scala).
Range is hardly return-type polymorphic. It always returns the same things, you are just free to ignore them, i.e.
for k := range m { }
is the same thing as
for k, _ := range m { }
Would requiring the second version instead of the first actually be that much of an improvement?
The only true return-type polymorphism is type assertions, which is reasonable in my mind because I don't think ignoring the "assertion failed" case should ever be a logical thing to do.
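Concretely, the two assertion forms look like this (a minimal sketch; the function and names are mine): the two-value form reports failure via its second result, while the single-value form panics on a mismatch.

```go
package main

import "fmt"

// describe uses both type-assertion forms: the two-value form is safe
// and never panics; the single-value form panics if the dynamic type
// doesn't match, so it's only used here after the type is confirmed.
func describe(x interface{}) string {
	if s, ok := x.(string); ok { // two-value form: check and extract
		return "string: " + s
	}
	if _, ok := x.(int); ok {
		n := x.(int) // single-value form: would panic if x were not an int
		return fmt.Sprintf("int: %d", n)
	}
	return "something else"
}

func main() {
	fmt.Println(describe("hi")) // prints: string: hi
	fmt.Println(describe(42))   // prints: int: 42
}
```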
I haven't used Go or Rust but when I compare them I have a gut feeling that I'd be better off with Rust primarily because Go's build tools are not adequate. What do you think about this?
Well, your question is simply too open-ended: nobody can really answer that for you, as you give no indication whatsoever as to your potential use-cases.
FWIW: I've been coding in Go for a few years now, for me and (perhaps more importantly) the kind of projects I choose to use it for, the build tools have been more than adequate.
As with many things: it ultimately depends on what you're wanting to do, and what your expectations are.
I've implemented some website backends, a ton of 'micro' services, various command-line tools, and a bunch of data-processing stuff. (The distinction between some of these is somewhat arbitrary)
These are the areas I think Go is most suited to, currently — and they've all been a breeze to implement/test/deploy/maintain.
What do you mean by not adequate? Go tooling is way better than Rust's. The only thing Rust does better in that area is dependency management, and that's going to change soon for Go.
First, I want to say that I have great respect for you and everyone else in the Rust community. It's now my second-favorite language (after Go).
I've been learning Rust over the past months and I don't think the borrow checker stuff is as scary as people make it sound, match is amazing, so are traits, enums, cargo....
But, I can't, for the life of me, wrap my head around Rust's "concurrency" story. Yet.
Go concurrency is as close to trivial as one can get - channels, goroutines, select. That's it. And they compose nicely. I was doing concurrent networking stuff in a matter of days.
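As a minimal sketch of how those three pieces compose (a hypothetical fan-in; all names are mine, not from the thread):

```go
package main

import "fmt"

// fanIn merges messages from two channels into one slice using select:
// goroutines produce, channels carry, select multiplexes.
func fanIn(a, b <-chan string, n int) []string {
	var out []string
	for i := 0; i < n; i++ {
		select {
		case v := <-a:
			out = append(out, v)
		case v := <-b:
			out = append(out, v)
		}
	}
	return out
}

func main() {
	a := make(chan string)
	b := make(chan string)
	go func() { a <- "from a" }() // producer goroutine
	go func() { b <- "from b" }() // producer goroutine
	fmt.Println(fanIn(a, b, 2))   // both messages, order not guaranteed
}
```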
Rust has channels, select (kind-of?), but then I discovered it has futures, and something called "mio", and now something called "tokio", also read something about generators and coroutines, saw that C#-like "async/await" stuff is also being worked on, and I'm not sure how these things interact with each other.
I get the .NET async/await story, and always felt it was (good) syntactic sugar on top of "raw" JS-style futures, but how do all of them play with the CSP-style stuff like channels that's in the std. lib already? Is one being deprecated?
I feel in general Go's CSP model is more "powerful" and more "generic" than futures - you can even emulate futures/promises with Go's primitives, but I don't see how you can do what I do in Go with bare futures - "streaming" type stuff is especially hard with the futures' "all-or-nothing" approach.
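For instance, the future/promise emulation mentioned above takes only a few lines (a sketch under my own naming; a future is just a one-shot buffered channel):

```go
package main

import "fmt"

// future runs f in a goroutine and returns a channel that will deliver
// its result exactly once -- a minimal promise built from CSP primitives.
func future(f func() int) <-chan int {
	ch := make(chan int, 1) // buffered so the goroutine never blocks
	go func() { ch <- f() }()
	return ch
}

func main() {
	f := future(func() int { return 21 * 2 })
	// ... do other work here, then block on the result when needed:
	fmt.Println(<-f) // prints: 42
}
```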
I think Rust really nailed down the "one-thing-at-a-time" story - the docs, tutorials - all great. But for me it broke down quickly on the "easy concurrency" end - and maybe it's just a question of better tutorials/docs?
Yeah, so this is basically the result of two things:
1. Rust-the-language focused on making concurrency safe, and getting the primitives right
2. The higher-level aspects of the story are still shaking out.
The core primitives are very simple: Sync and Send, two traits which let you declare invariants. But most people don't want to be programming with primitives. So there's been a lot of iteration on what a higher-level story looks like. There are a ton of options, and we've been iterating through them to figure it out. The stuff you're seeing is the result of that process, so you're seeing how the sausage is made, which can definitely be confusing. It's all under active work, and then, once the pieces are in place, getting the right docs/tutorials there.
So, yes. Thanks. It's a good reminder. We'll get there :)
One last thing, as a small amount of food for thought:
> I feel in general Go's CSP model is more "powerful" and more "generic" than futures
In some senses, yes, but they also come with tradeoffs, like any technical choice. Specifically, in order to do concurrency this way, you must go with green threads. And green threads lead to smaller stacks, which leads to non-zero cost of calling into C code, which is a drawback that Rust can't sustain. It works great for Go, but can't really work in Rust, generally. I mean, you can do it, yes, but that drawback doesn't work for the majority of our audience.
It's these kinds of tricky, in-the-weeds stuff that makes designs hard and take time.
All of that is supported by default by the Go team on all platforms, meaning it works well and I don't have to worry about using a 3rd-party Cargo package.
There is nothing out of the box for profiling (but it's a bit different since there's no GC and you can use C/C++ tools).
What's the equivalent of godoc -http=:6060? (It sounds silly, but last time I was on a plane I was able to browse the Go docs as if I had internet access.)
I haven't used Rust yet, but I can tell you that the Go tools are very robust. It's a very good language if you stay within its optimal scope and do everything by the book; Go tends to punish quite hard developers that try to be "creative" with project layouts, best code practices and so on.
Nonsense. I love Go and Rust, but I would never develop a database in a garbage-collected language again. Btw, I have done it a few times. Go is better than Java, but it still:
Can’t call into C as fast
Has micro GC pauses which affect performance
Doesn’t give me explicit control of the hardware
Go is great for middleware, but the authors got it right
That's not a fair summary at all. For example, they explain why C++ was off the table. They also use Go for TiDB (which is the high-level distributed query engine layered on top of TiKV), but for TiKV they needed fast and easy access to C, which Go, in its current state, can never provide.
To be fair the language is very pleasant to work with if you're a beginner, because the compiler seems to have been built around the concept of giving the most helpful (and polite!) messages possible.
Never have I ever had a compiler tell me to "consider" changing some type declaration, because maybe, just maybe I was probably intending to do what it suggests all along.
Also generally things you didn't write still compile and run. This is not a given in e.g. C/C++.
> the compiler seems to have been built around the concept of giving the most helpful (and polite!) messages possible.
I can't confirm that. I have already run into compiler bugs and very, very unhelpful error messages, and I haven't really done that much in the language.
There was another one, but I can't find the issue for it. I asked in IRC about it and it was fixed by adding &* before an expression. I didn't understand why ...
The &* thing is due to Deref coercions. The dereference operator is a trait (Deref), which has an associated type that it dereferences to. This lets you do nice things, like dereference a Box<T> to just a T, and let the Box implementation figure out how to get you there. (It's also used on method calls using .)
What you have is a reference to a type with a Deref-coercion to the type you want, but the compiler won't coerce under the reference for you, so you have to dereference and then reference it again, and then type inference can work out that you wanted to reference the coerced type.
I mean, I've dropped in and out of the Ruby, Python, Javascript, Go, Clojure, Elixir etc. communities and they all try to be welcoming. It's kind of a given. It's rare to find a language community that isn't welcoming, and I think you'll find that even the most stubborn (Lisp, Haskell) are very happy to have you and very happy to help new users.
You probably have an argument that they are doing something different and right, but how markedly different is it really?
I agree that most programming communities are friendly and open.
I can't help but notice that the same communities that have the reputation for being prickly are the ones that have the largest influx of people who come stomping in to the community, and one way or another try to turn the conversation to why everything the community is doing is wrong and stupid and how everything the community is doing needs to immediately be rewritten in accordance with their unambiguously correct opinions, even though they just joined the multi-year party ten minutes ago.
No, I do not have some particular community in mind that I'm coyly not specifying when I say that; it is a pattern I've observed across quite a few communities.
What kind of agenda could there be? Mozilla can't afford anything like as many people to push a language as Google can. Rust is pretty popular but the simplest explanation that fits the facts is that the language really is that good.
Here at HN Rust seems to come up frequently and it's indeed why I decided to take a look.
For my many colleagues who do not read HN, Rust is something they might have heard of but haven't given any real attention to. For that matter, most haven't looked at golang either, and they probably couldn't tell you what the differences between the two are.
IMO it will take a while before it grabs the attention of the mainstream software community.
I'm (very) old school, C is still my language of choice (though it could use some help). I like the syntax, it's pretty simple.
I tried playing with Rust and found the syntax to be off putting. I really wonder why each new language feels it is important to come up with a different syntax to say the same stuff. Go did a lot better than Rust in this respect, at least in my opinion.
It may be that appealing to C programmers isn't a thing any more, but if it is, then Rust could have done better. And, yes, I get that the syntax isn't the selling point of Rust, trust me, I get it. I just don't get why make people wade through some weird syntax when you don't need to.
When I look at Rust it seems to me to be very much C-inspired syntax. The code-blocks-return-a-value style makes it much easier to understand some conditional paths.
"Why would Rust change the syntax to say the same stuff?" Well, Rust is able to meet all of C's requirements while addressing one of C's big old warts: lexical analysis of C code is Very Hard (e.g. whether this particular build added "-DFOO" or "-Ibar/" to the command line can determine whether there's a syntax error or not).
This means that writing robust tools to analyze or modify C code is Very Hard. I don't know if this was an inspiration for Rust's syntax but regardless I'm thankful that they didn't try to reuse C syntax.
Most other successors to C (save perhaps C++) have taken on features that make them unable to write code that we have been able to write in C (bootloaders, OS kernels, interrupt routines, device drivers, etc). Teams had a really legitimate and mostly sane reason most of the time for saying "we couldn't possibly consider Java/Go/Python/etc because we won't be able to meet our product's latency requirements."
What is the biggest difference between C syntax and Rust syntax? I can only really think of how types are stated: in C they precede the variable, whereas in Rust you put a colon and then the type (but often you don't need to do this cuz of type inference).
This change is much better IMO because there's a clear delineation between types and variables.
The same idea applies to function signatures, which are much clearer to me in Rust than in C/C++.
Yes, precisely. C, for the most part (function pointers are an exception), has a pleasant syntax. Go didn't like semicolons and other stuff that it found unneeded, but mostly took a lot of syntax from C, so it's easy to read Go code even without understanding the language.
Rust kind of went in a different direction. Why? Understanding C syntax is pretty basic, there are a lot of C like languages. Why not be another one?
Personally, if every language was just a C variant I'd be happier. It would feel like "OK, I've got the basics down, let's go explore this dialect that added garbage collection and strings as real types" rather than "I want strings as real types, darn it, I need to go learn this Tcl language". That might be an extreme way to make my point, but I think it's clear, right?
What about the people that don't like the C style syntax? I think there is room for many different types of syntax. Just stick to the style of languages you like and ignore the ones you don't. Pretty simple.
Yeah, exactly the right question. To me, C is like English, everyone has learned how to speak it. Creating some new syntax is a little like switching to a different language. I find that difficult and annoying.
Here's an example. When I was doing the GUI tools for BitKeeper 20 years ago I used Tcl because Tk was (and still is so far as I can tell) the best gui toolkit around. But Tcl? Holy moly, what a miserable language (with apologies, and respect to John O, it's miserable). So I got some compiler people to make a second, parallel, compiler that took what looked like C as input and compiled it down to Tcl byte codes.
Bingo! All the power of Tk but with a C-like language on top. And you can call the Tcl library code and it can call you.
Syntax is a method to express ideas. Ideas at the core of a language should be easy to express with that language's syntax.
Some ideas are not possible in C as they are possible in Go/Rust/other languages. With the years, some ideas become more mainstream and then get integrated into the new languages that arise. These new languages need to express ideas that were not expressible in C, and thus may be better served by adopting different syntaxes.
In short: the ideas at your disposal and the ergonomics with which you can express them are a function of the syntax used to represent them. Different languages focus on different ideas, and so it follows that different syntaxes might be warranted.
"New and weird" is always relative to the industry/domain of application. In each domain, whether it be avionics or web development, managers are not going to want to use it until it has a proven record of success in that domain.
On the other hand I can't wait when Swift becomes general, non-Apple language, available on most platforms (including the most popular one) "with batteries" - that will be the end of Go and Rust I think :)
Will it? I don't think so, although I think Swift is great.
IMHO Rust targets another market compared to Swift. Rust feels more low-level, e.g. Swift uses Reference Counting for everything. Swift also can't give you some of the nice guarantees you get in Rust.
Go also has some nice stuff compared to Swift: simplicity, goroutines or tracing GC.
Go is also already widely used.
Swift is certainly great for Mac/iOS-Development, but I am somewhat skeptical for other platforms.
I am afraid that non-apple-platforms will always feel a bit like second-class.
It's just not Apple's main priority and most Swift-devs will always be paid by Apple.
But who knows, I may be wrong.
I don't think ARC is an issue here; it's just deterministic reference counting, a la implicit use of smart pointers. Besides, Swift also has some unsafe pointer types as well.
Yes, you're right about "deterministic", but you forget about the overhead. Rust gives you safer control over memory management, and in most cases you can write a fast, efficient, safe program without that overhead. Rust gives you different smart pointers, and sometimes, when you understand 100% what you are doing, you can even use unsafe raw pointers.
For certain programs a GC can basically make a programming language completely useless (no shared libraries with C abi, no realtime audio, videogames with stutter) regardless how fast the programming language is. This is not the case for swift. It's merely slower than rust.
The Rust "evangelists'" version of "systems" is most of the time the broader meaning of the term, as Rust does not exclude an entire class of systems the way some languages out there do.
apologies, but i don't quite follow. what do you refer to by 'class of systems'?
one common sentiment promulgated by rust evangelists is, for example, that go is 'not a systems language' because it has a GC. however, this definitional exclusion doesn't seem to align with (evidently broader) historical use of the term 'systems language'.
Truly real-time embedded systems, low-level operating system components, etc. are the kinds of things that GC languages exclude completely, so by definition a subset of "systems programming" is eliminated from use with e.g. Go but not with Rust. Consequently, Rust covers a wider class of systems, which makes it a true systems programming language like C and C++.
Simply put: Go ISN'T a true systems programming language because it simply does not work for every and ALL types of systems.
Yes, it may work for some types of systems, but that does not make it a systems language.
If you don't want to accept this fact, then you have bigger problems, or your reality is narrow.
Another way to say it: The set of systems programming tasks for which Rust can be used is a superset of that for which Go can be used. Go cannot be used for real time code, as it does not provide latency guarantees. Go cannot be used for many embedded (microcontroller) systems as the runtime is required.
I've gotten Rust code to run on an ATSAMD21G18 Arm Cortex M0 (Adafruit Feather board). That board has 256K flash and 32K RAM. Go executables can't even fit in the program flash! Considering the sheer number of systems that have microcontrollers in them somewhere it would be very hard to call a language that doesn't support them a "systems" language. Maybe "applications language" would be more appropriate.
Seriously? Fallacy? That is really not wanting to understand things as they are. You should really go back to the textbooks and try to understand hardware.
while I think that that would be a good thing (I like what happened with C#), I see the languages specified and controlled by Microsoft, Google and Apple as second-class languages, since they are often lacking in community input and are usually designed with certain platform-specific goals in mind instead of being cross-platform. (or company-strategic goals when it comes to Google)
I can understand conservative choices when it comes to picking a language, but what you describe is a potential risk for plenty of other existing languages, as well as for any new language. Rust wouldn't be where it is today if it weren't sponsored by Mozilla. Java was Sun Microsystems' baby, and I remember all the concerns at the time with the Sun vs Microsoft fight over control of the ecosystem. Plus, even many of the non-corporate languages are still heavily dependent on a small subset of maintainers.
At the end of the day if a language proves popular enough in other domains outside of the area which is directly controlled by the language maintainers, then the community will usually find a way of taking over maintenance of it.
We've seen this with Pascal, various BASIC dialects (including Visual Basic), and to an extent Java too. The problem with Objective-C was that - as far as I'm aware at least - it wasn't widely used outside of Apple / NeXT's ecosystem, so if Apple deprecate support for Objective-C on their own platforms then there's little incentive for the community to keep using the language (much like the problems with Visual Basic - which is why few know about its open-source forks). But languages like C# and Go are used massively across a multitude of domains, so even if MS/Google were to kill them tomorrow, the community would almost certainly find a way to keep the language alive. Heck, Go might even become more popular if that happened, since many of the complaints against it are down to the highly opinionated approach of the current leadership.
Let's also not forget that Go and C# tooling are open source so the community wouldn't have to reinvent the wheel like they did with Delphi / Object Pascal and Visual Basic.
You mean like C and C++ being developed at AT&T, nowadays designed at ANSI, with people on ARM, Google, Apple, Bloomberg, Sony, IBM, Microsoft's payroll?
Actually, C was a lot worse. For the first 9 years, C was basically whatever AT&T wanted. In '78, K&R published The C Programming Language, creating a specification. For the next decade, C was very much tied to Unix, and non-Unix usages often had some oddities and interoperability issues. The last 3 decades have been good, though.
You know... there's a difference between "(...) with people on ARM, Google, Apple, Bloomberg, Sony, IBM, Microsoft's payroll" and "people on Google payroll". In one case you have many companies cooperating; in the other, just one company.
I think what <pjmlp> is saying is that (1) these languages are too widely adopted to fail -- somebody is bound to take the ball and continue dribbling it even if the worst happens (company abandons language) and (2) to have a widely adopted language means you have to be a member of a number of committees -- because without standards you wouldn't become a widely adopted language in the first place.
In short, we're quite safe in terms of if C#, Swift and Go will live on. They will.
The issue with C# and Swift is that the Apple and MS dev communities enjoy having a ready made solution that they can pick up and work with immediately. Official support is very important.
C# and to some extent Swift are also two huge platforms, there are very few organisations out there that would be able to steer their development.
Finally, they are not standardised in any way. MS tried something with 2.0 and then gave up.
My impression is that these two live and die by the will of their corporate masters. I'm not saying they will kill them or anything, that would be pretty stupid of them to do.
If Google, Apple or Microsoft take their employees out of ANSI or from GCC/LLVM contributions, I can assure you they won't move much beyond the current state.
What are the odds that all big players would remove support for something that their platforms are built on? At the same time?
C++ is one of the healthiest languages in existence. It hits all the important checkboxes of standardisation, wide industry and platform support, and a large community.
One example that comes to mind is Go dependency management. For a long time (and still?) Go had no way to declare versioned dependencies. Obviously this is no problem for Google, which reportedly builds from HEAD, but it shows the Google bias in Go's development.
All three have different goals and solve different problems. While you can compare things like "they both have MATCH statements" or "unions are safe", deeper they have nothing to do with each other.
You would be crazy to use swift (or objc) in real-time stuff. You would be crazy to write a UI-heavy app in rust. (note: I'm talking about high level DOM-like manipulation, not about rendering engines)
> You would be crazy to write a UI-heavy app in rust. (note: I'm talking about high level DOM-like manipulation, not about rendering engines)
I think the jury is still out on that one. I agree with the current state of things but the potential to ease and facilitate this is tremendous through the use of syntax extensions (procedural macros) which can dramatically simplify that use case. In general I think procedural macros add a _lot_ of versatility/flexibility to the language. I anticipate that there will be a huge boom in that area once they stabilize, and it will catch many people by surprise.
An example of the versatility that they enable is the work-in-progress async/await [0], whereas in other languages they would usually have to be implemented in the language itself. Note that this does not preclude their implementation in the language itself, but since it's a work-in-progress they're able to experiment with them without having to implement them in the language from the beginning.
I have to agree, things could change in the future. However, my gut tells me Rust GUI libraries/frameworks won't be as great as, say, Cocoa, although that's something I would love.
In the same lane of thought, I am in love with Go when it comes to microservices, network libraries/bridges and CLI tools but it's very ill-suited for web development. So I shrugged it off and only do web dev with Elixir.
Conversely, Elixir is awesome for a multitude of things but it absolutely can't compete with Go in its strong areas.
I think TiKV is a good example where the team chose Rust over modern C++. Rust gives the same performance and gets as close to the metal as C when necessary. It catches memory management mistakes at compile time, as long as the code isn't "unsafe", and this is really great!
C++ combines a lot of different paradigms; maybe it would be more correct to say "C++ paradigm hell"! No one understands which of C++'s subsets is the right way. Even Bjarne Stroustrup said, "Within C++, there is a much smaller and cleaner language struggling to get out." And where is this "smaller and cleaner language"? What is the idiomatic style in C++? Is it the Google guidelines, the C++ Core Guidelines, or some other enormous guide?
I've looked inside a lot of C++ projects, and each of them has a different style and uses different paradigms; sometimes they look like different languages!
Rust, Go, C and Java code bases each look the same within their language: each has its own idiomatic style, its own way.
I think Rust is the next step in the evolution of system programming language.
C and Java code bases certainly don't look the same. In fact, claiming that about C is simply insulting to the intellect of anyone reading your overenthusiastic message.
I also sincerely doubt that Rust code bases will look the same in ten years. It supports functional and OO paradigms and it's attracting very different classes of programmers. Recently someone wrote a post about difficulties with some OO concepts in Rust, and a top reply said that they never encountered such issues because they program in a functional way.
Go is an exception here, but as soon as they extend the language in a significant way (e.g. templates) differences will start to appear.
C++ doesn't have an idiomatic style because it's used in very different ways by different people. It's impossible to have a fixed style and address the mass market.
I don't think anything in Rust supports the OO paradigm. There are instances where people ask for it, but so far there haven't been any moves toward it.
There is no inheritance, and no polymorphism based on it. There is parametric polymorphism via generics, but nothing like inheritance-based subtyping.
I think you misread a terribly misreadable statement: what he meant wasn't that C and Java look the same, but that any two C codebases will share a lot of similarity, any two Java codebases will share a lot of similarity, and so on. C++, on the other hand, can be anything from "C with objects" to deep template metaprogramming to "has there even been a C++ before 2011?" styles, which are as dissimilar from each other as you can get without crossing language borders.
No, there's OO C, low-level C, GLib C, etc. There's Android Java, Enterprise Java, Standard Java. Any language catering to different customers will have various styles.
There are two big C++ coding styles: "C with classes", an old style that sees little use nowadays, and modern C++, the recommended way, used in new projects. Asking "what is the idiomatic style in C++?" must be a rhetorical question, because it's obviously modern C++; the leaders of the C++ community have made this clear repeatedly.
Template metaprogramming is a technique, not a programming style.
OP should do more hacking without fear and less spreading FUD.
> After years of usage of GC, it is very hard to go back time for manually managing the memory.
... are you guys sure about your "experienced C++ developers"? There's as much memory management in modern C++ as in a GC'ed language: none. Create your objects with `make_unique` or `make_shared` according to what makes sense (or just enforce `make_shared` if you're really dubious of the coding abilities of your team, but at that point you'll have problems whatever you do).
There is a host of distinctive differences between garbage collection and reference counting. Yes, both are memory handling strategies. That's where the similarities end.
"Just slap it in a shared pointer" is never good advice without knowing what 'it' is or what kind of system it exists in.
> "Just slap it in a shared pointer" is never good advice without knowing what 'it' is or what kind of system it exists in.
I agree, but it seems from their blog post that they are not sure their developers can handle the "mental overhead" of managing ownership, hence the simple solution of going for shared ownership every time.
(btw, I re-read your post three times and could not find any hint of harshness !)
"No, reference counting is commonly seen as a specific implementation of garbage collection"
Shared pointers provide reference counting, but reference counting alone hardly constitutes a garbage collector, because it doesn't collect all of the garbage.
For example, shared_ptrs alone do not automatically detect and collect cycles.
The dirty secret of the "garbage collection vs. manual memory management" war is that there isn't actually a bright shining line to be drawn anywhere; it's actually a relatively smooth continuum ranging on the one end from statically allocating all values up front (in the style of embedded system) to the dynamic languages on the other end, with dozens of stops in between.
Yeah, that read like some Java advertising from the 90s.
I wish more of these posts were honest and said "we picked X cause we think it's cool and we're gonna get paid to learn it".
But they have to make up some convoluted explanation that sounds rational and acceptable instead.
From the post, it looks like they really do think picking Rust is dope, and they've already built a cool DB system on it. Maybe they've already got a bunch of bucks in the pocket. Huhhh..
But then your program uses slow, cache-unfriendly and much-reviled reference counting. I'd even prefer Ocaml and its fully-featured GC if C++ is only fast in artificial benchmarks and not in idiomatic code, which apparently must use RC.
Note: I haven't used C++ at all on any project larger than a single file.
> And do all the C++ libraries take only smart pointers as arguments and return only smart pointers as return values?
C++ libraries as recent as the Rust libraries in question would certainly take things by value or reference, so there would be no problem. I honestly don't know of libraries with raw pointers in their APIs that aren't from the 90s or before, and I don't think you want to use those in a current product anyway.
This is a little difficult on Windows or POSIX systems.
Also, as others have pointed out, you can smart-pointerise everything but still have problems with common tree and graph structures. Rust is rigorous; C++ isn't. I'm not aware of any mainstream compilers that even have an option to turn use of bare pointers, unsafe casting, or undefined behaviour into compiler warnings/errors.
Not everything has a convenient wrapper, just the high profile stuff like files and GUIs. Are there wrappers for things like dlopen()? posix_madvise? All the various set... functions? Filesystem ACLs? COM objects?
(I'm something of an outlier here, maintaining a big legacy MFC application that targets Windows CE, but I can't be the only one. One implication of this is that I'm using the Microsoft MIPS compiler with this banner, that's probably older than some of the readers here and certainly predates C99:
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 12.00.8804 for 80x86
Copyright (C) Microsoft Corp 1984-1998. All rights reserved.
... but my point is that it's not sensible to say nobody's using the system native APIs in 2017!)
> I'm something of an outlier here, maintaining a big legacy MFC application that targets Windows CE, but I can't be the only one.
The problem is not developing "legacy" apps, it's comparing the development and maintenance of "legacy" apps with apps that just get started being written today, for which the bare minimum is being cross-platform.
> Are there wrappers for things like dlopen()? posix_madvise?
none that I know of :( though MS has a fairly decent "modern C++" API that covers WinRT: https://github.com/Microsoft/cppwinrt but I don't think ACLs are even available in WinRT
> Boost do seem to be aiming for complete wrapper coverage.
I don't think "boost" is aiming at anything. If you have a good idea of a library (and a good implementation!) you can submit it to boost. It's more a big repository of libraries with a somewhat consistent coding style.
They don't, and it is (or should be) considered bad practice for functions, in general, to take or return smart pointers. Normally these should only be used as variables or class members; functions, on the other hand, should take/return raw pointers (with some special-case exceptions) or, better yet, references (except when there is a need to check for a null value).
I disagree. A function that allocates something and returns a unique_ptr is much clearer than one returning a raw pointer: ownership is clearly being passed, as opposed to maybe just handing out a view into some internal buffer.
I mean, if you only view functions as being called for side effects, then yeah maybe. But if you're constructing a data pipeline, unique_ptr in and out makes a lot of sense.
It's just a very loaded opinion, while you pretend it's a universal fact. I have never heard, nor experienced, any seasoned C++ developer claiming that smart pointers are a substitute for garbage collection.
Yes, smart pointers are nice; they relieve you of a whole lot of manual memory management. But the programming model is still vastly different from what you would do in a traditional GC'ed language (such as Java or C#).
A lot of C++ big names have said that smart pointers are better than garbage collection for handling resources. And they are, because GC only handles memory.
Having memory leaks in modern C++ is a sign of not keeping up with the established idioms. Saying that experienced C++ programmers are worried about memory leaks is bizarre.
You're talking to someone who says they're an experienced painter. They show you some of their work. You reply "are you sure you're an 'experienced painter'?"
After answering a question during a job interview, the interviewer says 'are you sure you're a "senior programmer?"'
I don't see a problem with either (and heard far worse in my professional life; the best to date would more or less translate to "what are you ? a bunch of fucking monkeys ?").
And how would smart pointers help you if you need to return a pointer to a member from a function? Does C++ protect you from a moved-from unique_ptr? Or from iterator invalidation? Or can you safely use a non-atomic shared pointer as long as you don't send it across threads?
While "just enforcing `make_shared`" wouldn't solve all the memory safety issues, it actually can be somewhat practical to "just avoid using any (or most) C++ elements that can access invalid (or uninitialized) memory", instead using the safe, compatible substitutes in the SaferCPlusPlus[1] library.
Considering that their team likes Go, it seems strange to me that they would consider Rust over Go for the storage layer. A storage layer should be IO-bound, and should hardly trouble the CPU; the choice of language really should not be a determining factor. The big wins in that space are architectural, not language specific.
"A storage layer should be IO-bound, and should hardly trouble the CPU; the choice of language really should not be a determining factor."
This used to be true, but it's out-of-date now. You can now get a network pipe into a system that a rather beefy multi-core CPU using a user-space TCP stack can barely keep up with, let alone do any real work with, and if you can scrape up the PCI Express lanes, putting a few of the latest SSDs into a system can start getting you theoretical maximum bandwidth numbers that just a few years ago looked more like what you'd expect from a RAM bandwidth number.
I'm of the opinion that it was already not as true as commonly supposed 5 years ago (in my experience using slow languages on putatively IO-bound tasks was still noticeably slower than using fast languages), but the latest in network pipes and SSDs have really ended it. It's true that on most desktop systems you've still got more CPU than you know what to do with, but as you step into the serious database space that's not true anymore. For a serious database I wouldn't be perturbed if someone looked at Go's performance and just plain discarded it on the spot, even before considering GC issues. It's very fast for a scripting language; it's fairly slow for a compiled language. "The compiler spends hardly any time on optimization" is not what you want to read about your database implementation language.
(I've got one of the nvme SSDs in my laptop, and it is interesting to see just how many CPU bottlenecks there still are in systems nowadays. In some sense, I really shouldn't ever see a "loading" screen because you "ought" to be able to read things off of my SSD fast enough to completely fill my RAM in 5-10 seconds; "merely" loading Firefox ought to be somewhere in the 50ms range. In practice I still see loading screens and load waits, because the CPUs are still doing things. Lots of things that used to be dominated by and hidden in the load time, but aren't anymore.)
You can also do a simple math analysis to see it. If you have an incoming 10Gbps connection, a single core machine has approx. 1/3rd of a cycle per bit to do everything it's going to do with that packet. Even going to a 128 core machine and assuming perfect parallelism with some sort of magical packet muxer gives you a whopping 43-ish cycles per bit. I've never worked on this myself, but I saw a team in my company working with it and were pretty pleased to be able to push ~2Gbps through their 10Gbps network connection with a pretty beefy machine, and just about all they were doing was relatively simple load balancing.
And cache. Compiled languages tend to be more cache friendly, which contributes a lot to speed of execution.
And too many apps still do cause CPU spikes, often for quite a bit of time, and no doubt much potential for optimisation lies there. The popular perception that CPUs are fast enough to deal with nearly every workload is inaccurate. Even an innocuous bit of Javascript on a webpage, probably doing some trivial stuff, causes 100% usage for several seconds.
Developing a storage layer as elaborate as RocksDB from scratch is quite an endeavor, and wanting to just use RocksDB instead of making your own is a smart decision. From there, Go is sort of easy to throw out of the picture: using cgo kills performance and safety. I say this as a person who uses Go as his workhorse, has used it for many years, and has a favorable opinion of it.
Ah, my mistake. I misunderstood; I thought they were replacing the C++-based RocksDB storage with one written in Rust. Yes, it makes perfect sense to use something that has already proven itself.
Rust would use less memory than Go. (Dropbox also likes Go and used Rust over Go for the storage layer, and when asked, memory usage was their primary reason.)
Anybody have experience with TiDB? How does it stack up against CockroachDB? Seems hard to find comparison. Probably hear less about it mostly because it's developed in China? Looks like it's an impressive piece of tech, though.
A big difference is that TiDB is not ready for production yet.
Having followed the project for a while, another distinction is that TiDB is operationally more complex. You need to build and deploy TiDB (high-level query engine), TiKV (key/value store) and PD ("placement driver", which coordinates sharding and data migration) separately. TiDB is stateless and can be scaled freely, but TiKV and PD are both stateful and implement their own distributed consensus systems. PD actually embeds Etcd, whereas TiKV has its own Raft implementation in Rust. Compare this to Cockroach, which has a single monolithic daemon that you deploy everywhere, which contains the distributed query engine, the key/value store, the consensus/cluster coordinator, etc. (There may be benefits or drawbacks to the difference in design; I don't know the internals of either project well enough to debate that.)
For an internal project I'm working on, running TiKV standalone actually looks very interesting, but it's not very well documented yet.
Yeah, but unwrap is very, very explicitly marked, and its use is discouraged, and unlike null it's nothing close to an undesirable edge case on every single pointer.
Edit: In my perfect language there wouldn't be an unwrap, only matching on an Option, but I guess Rust is too pragmatic for that.
Yeah, but using Option is encouraged, while using Option.unwrap as error handling is discouraged.
OK, to be more precise: Option in Rust isn't a null pointer, it's a nullable pointer. Practically speaking, only Option::None is the null pointer. You can either deal with it (using if-let or match) or you can `unwrap` and assume it's never None. If you make that assumption, then if and only if it was actually Option::None, it will panic (Rust's equivalent of a null pointer exception).
In contrast, something like C or Java will allow you to use your nullable pointer without any check (because all pointers/references are nullable by default), and it's relatively easy to skip this step. In Rust, it's relatively hard to skip.
Use of Option.unwrap (or Option.expect) in libraries is not discouraged, and it shouldn't be. It's an excellent way of checking a runtime invariant. Use of Option.unwrap is discouraged for error handling.
Stated differently, if a library you're using causes a panic, then it should be interpreted as a bug. The bug might be in the library, or it might be in the way that the library is being used (assuming the panic conditions have been documented as part of the library API's contract).
A separate argument says that you should reduce the number of places where your code can panic. That sounds like a fine goal to strive for, but must be balanced with other things.
Right, but what you did not get was undefined behavior. That is the real difference, not that Rust "discourages" unwrap(). unwrap() panics, while unfortunate, are well defined.
Halting in a defined manner vs. continuing computation while undefined conditions prevail are fundamentally different outcomes. This is indeed a difference.
You simply cannot know that and your assertion reveals a lack of understanding. Computation may continue indefinitely, unintentionally distributing your private keys or doing some other heinous thing forever, without any sort of MMU trap. Undefined means undefined, not 'segfault.'
Closures naturally generate cycles in the data dependency graph. A way out would be to copy the environment of a closure, but that would mean a performance penalty.
Whether and how closures generate cycles, and consequently the best implementation strategy, depends heavily on the language, though. You might have a strictly nested call stack or thunks and continuations; shared mutable environments or immutable copies and moves; copyable environments or linear/affine closure types; boxed closures that can be stored in data structures or unboxed closures that can’t always; first-class or second-class closures; a GC to rely on or none; &c.
It is kind of funny how software engineers can engage in lengthy discussion about tooling. Imagine the same for architects. Instead of looking at the building they would talk about the type of hammer they used while building it.
With software, the material you construct your creations from influences the means. Architects most certainly do argue about whether they should use cross-laminated timber, reinforced concrete, glulam, or steel. They talk about these things and write long pieces on them. The materials influence the design of the building.
They don't talk about it on blogs on the Internet because that's not where the audience is. But they do talk about this.
I think "hammer" is a pretty diminutive parallel for a choice of language. A better parallel would be "architects debating steel alloys versus composite materials", which is not crazy.