Watch out for bikes and pedestrians, please. Few people expect a car (especially one coming from the other side of the street/lane) to drift past a space and then reverse into it.
If a person is walking north, about to walk past a parking space, and a car is heading south and signaling left, it's very surprising to see the car pass the person by, then suddenly reverse and try to enter the space. Once the car is past, most people assume it's signaling to turn farther along, instead of about to go backwards.
EDIT: To be clear, I'm describing a perpendicular-parking scenario, where people usually make a right angle turn going forward to occupy the space.
This is one of many reasons why parking maneuvers should be performed slowly. I once turned into a space to park only to encounter a woman on the other side of her housewife panzer tank. The only reason I didn't hit her was that I was going about 5 miles per hour the entire time, so when she came into view I could just put my foot on the brake and stop gently.
Contrast this with drivers I've seen who will haul ass into a parking space at 15 MPH and then brake immediately.
When you hang yourself with C, at least you're hanging yourself with something real. It's more embarrassing to be done in by a misunderstood abstraction in a high-level language than a concrete memory overwrite in C.
You can hang yourself in C by having code that dereferences a NULL pointer on a path that never actually executes, yet the compiler “optimises” your code based on it. Is that “real”?
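A minimal sketch of that footgun (the function and names are mine, purely illustrative):

```c
#include <stddef.h>

/* Because *p is read unconditionally, the compiler is allowed to
 * assume p != NULL, and may delete the NULL check below as dead
 * code -- even on runs where the dereference would never matter. */
int value_plus_one(int *p) {
    int sink = *p;        /* undefined behaviour if p == NULL */
    (void)sink;
    if (p == NULL)        /* the optimizer may remove this branch entirely */
        return -1;
    return *p + 1;
}
```

With optimizations on, some compilers will compile this as if the `if` were not there, which is exactly the "never executed, still bites you" situation described above.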
A farmer will put the animals' comfort and health and safety ahead of their own. They'll work long hours through heat waves and winter storms, get up in the middle of the night to check that everything is all right, spend every waking and sleeping moment worrying about the livestock, individually and collectively.
Sorry, you're right, I should rethink my argument. Billion dollar businesses aren't made on chainsaws and rocket engines. They're mostly made in conference rooms by salespeople with capital. So, identifying as a chainsaw or rocket engine is basically just an admission that you're a tool willing to work for less than what the salespeople think your labor is worth.
You aren't going to change the world as an IC. If you're really a 10x engineer, maybe put that 9x difference back into learning how to market yourself or a product, if you really want to make a difference.
You're not working for "less than what someone thinks you're worth", you're working for a price set by the market and an assessment of the quality of your work.
A developer gets paid before the salespeople have anything to sell. It's up to the company and its salespeople to try to not lose money on their deal with their R&D people. Considering the context of the article (Google engineer) and how much people in that market get paid, making money after you pay them is a tall order. Most businesses fail.
>You're not working for "less than what someone thinks you're worth", you're working for a price set by the market and an assessment of the quality of your work.
So what about the 1x engineer who negotiated for $50k more than you, even though your output is better than his? Your company isn't jumping to give you a raise. Is that just "the market"?
I wish it wasn't true, but Firefox seems heavier on low-spec Linux laptops. A while back I was writing code on the road on a $200 HP Stream running Ubuntu, and I needed multiple tabs open with audio to debug. Switching from Firefox to Chrome was a noticeable improvement, where the UI remained responsive in all the tabs.
In the long run, having a company dependent on a handful of highly-paid superstars is an incredible risk. Moreover if one team's hiring practices create a "class system" where there are vast disparities in how engineers are paid, there are going to be resentments.
If you build a special team in a company that has different norms overall, everything that you accomplish is going to be discounted according to how much it makes the rest of the company harder to manage, and how vulnerable the company becomes to a small number of superstars leaving. There's a bigger picture than just what the team gets done.
EDIT: That said, if the results are markedly better than the company's general performance, the company as a whole might need some re-tooling. But a special team isn't going to stay special for long, either way.
Teams with better people being more productive and getting paid more seems like a normal thing to expect. It seems weird to expect all people and teams to only be average, and to see deviations from this as highly risky.
It doesn't make sense to attack the special team here, as that team exists to show what is wrong with other parts of the organization, and how productivity might be improved if incentives/measures were changed.
It's also one manager's attempt to be more effective. Should one not strive to be more effective? Is it better to make sure that one doesn't differ from the average, since that would be risky to the business and create resentment?
If a more productive team emerged naturally from incentives available to everyone, and that team collected large bonuses as a result of proven success, I think the company and the team would both be better off.
If a team is designed from the beginning with a different set of rules than the rest of the company, it has a lot to answer for before it even begins, and its achievements will be accordingly discounted.
This is a fantastic post, and it looks like this blog is full of fantastic posts. When I have some time I'm definitely going to play around with that code.
This is going to sound sarcastic but it's not: Can we get back to just putting the members of C structures into network byte order and sending that over the wire in binary, à la 1995?
IIRC: capnproto generates messages that you could deserialize by casting them to the right struct, but refrains from actually doing it that way. Instead it generates a bunch of accessor methods that parse the data, as if you were reading something that's not basically a C struct, like a protobuf.
That's basically correct. Cap'n Proto generates classes with inline accessor methods that do roughly the same pointer arithmetic that the compiler would generate for struct access.
There are a couple of subtle differences:
* The struct is allowed to be shorter than expected, in which case fields past the end are assumed to have their schema-defined default values. This is what allows you to add new fields over time while remaining forwards- and backwards-compatible.
* Pointers are in a non-native format. They are offset-based (rather than absolute) and contain some extra type information (such as the size of the target, needed for the previous point). Following a pointer requires validating it for security.
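The first point can be sketched in C, assuming a reader that knows how many bytes the sender actually wrote (the names and layout here are illustrative, not Cap'n Proto's real code):

```c
#include <stdint.h>
#include <string.h>

/* A struct reader over raw message bytes.  `size` is how much the
 * sender actually wrote, which may be less than the current schema
 * expects if the sender used an older schema. */
struct reader {
    const uint8_t *data;  /* start of the struct's data section */
    uint32_t       size;  /* bytes the sender actually wrote     */
};

/* Read a 32-bit field at `offset`; if the field lies past the end of
 * the data that was sent, fall back to the schema-defined default. */
uint32_t read_u32(struct reader r, uint32_t offset, uint32_t dflt) {
    if (offset + sizeof(uint32_t) > r.size)
        return dflt;      /* field added after the sender's schema version */
    uint32_t v;
    memcpy(&v, r.data + offset, sizeof v);
    return v;
}
```

An old sender that wrote only 4 bytes still interoperates: a newer reader asking for a field at offset 4 simply gets the default.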
Re-read the comment, I think. It doesn't say to cast a struct pointer; it says to put the members of the struct into network byte order on the wire. I read that as individually serializing each member in a portable, safe way.
Anyway, even if you do choose the struct-pointer hack (which I do not see advocated here), it can be done relatively well, albeit requiring language extensions and a bit of care: pragmas and attributes to ensure zero padding and fixed alignment between members, no pointer members, and checking sizes and offsets after a read (the hardest part).
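A sketch of that careful approach, with the layout checks done at compile time (the message fields are made up for illustration):

```c
#include <stdint.h>
#include <stddef.h>

/* Struct-on-the-wire done carefully: fixed-width types, packed
 * layout, no pointer members, and compile-time checks so any
 * layout drift breaks the build instead of the protocol. */
#pragma pack(push, 1)
struct wire_msg {
    uint32_t id;
    uint16_t flags;
    uint8_t  payload[16];
};
#pragma pack(pop)

_Static_assert(sizeof(struct wire_msg) == 22, "wire size changed");
_Static_assert(offsetof(struct wire_msg, flags) == 4, "layout changed");
_Static_assert(offsetof(struct wire_msg, payload) == 6, "layout changed");
```

This pins the layout across compilers that support packing, but note it does nothing about byte order; the members still need host/network conversion on the way in and out.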
"As of this writing, Cap’n Proto has not undergone a security review, therefore we suggest caution when handling messages from untrusted sources."
Something like that has to be rigorously tested or proven to be free of buffer overflows. It's so easy to attack with malformed messages. Parsers for remote messages are a classic source of vulnerabilities. It's hard to test this, because it's a code generator.
This looks promising as an attack vector for a big system built on microservices. If you can find an exploit in this that lets you overwrite memory, and can break into some service in a set of microservices by other means, you can leverage that into a break-in of other services that thought their input came from a trusted source.
The "zero overhead" claim goes away as soon as you send variable length items. Then there has to be some marshaling.
> As of this writing, Cap’n Proto has not undergone a security review
This is outdated, I should remove it. Cap'n Proto has been reviewed by multiple security experts, though not in a strictly formal setting. I trust it enough to rely on it for security in my own projects, but yeah, I am cautious about making promises to others...
> Something like that has to be rigorously tested or proven to be free of buffer overflows.
I've done a bunch of fuzz testing with AFL and by hand. I've also employed static analysis via template metaprogramming to catch some bugs. See:
> The "zero overhead" claim goes away as soon as you send variable length items. Then there has to be some marshaling.
Space for messages is allocated in large blocks. The contents of the message are allocated sequentially in that space and constructed in-place. So once built, the message is already composed of a small number of contiguous memory segments (usually, one segment), which can then be written out easily. Or, if you're mmaping a file, you can have the blocks point directly into the memory-mapped space and avoid copying at all -- hence, zero-copy.
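The allocation scheme described here can be sketched as a simple bump allocator (illustrative only, not Cap'n Proto's actual implementation, which grows by adding segments rather than failing):

```c
#include <stdint.h>
#include <stddef.h>

/* Arena-style sequential allocation: grab one large block up front,
 * then hand out message pieces by bumping an offset, so the finished
 * message ends up in a single contiguous segment. */
struct arena {
    uint8_t *base;  /* start of the large block        */
    size_t   used;  /* bytes handed out so far         */
    size_t   cap;   /* total size of the block         */
};

void *arena_alloc(struct arena *a, size_t n) {
    if (a->used + n > a->cap)
        return NULL;        /* a real impl would start a new segment here */
    void *p = a->base + a->used;
    a->used += n;
    return p;
}
```

Because every allocation lands immediately after the previous one, writing the message out is just writing `base[0..used]`, with no per-object gather step.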
I would like to submit Apple's archaic “Rez”[1] as a great language for declaring binary formats. It was designed to be able to describe C and Pascal structures.
The wire encoding for protos is much more compact than the in-memory representation, especially for sparsely populated messages (very common especially in mature systems).
You'd still have to figure out some way to serialize nested messages. Note that you can have recursive message definitions.
Is that less of a configuration mess than WCF was? JSON isn't "The Magical Elixir" of data exchange and I'm more than open to something better but at least we (in the .NET community) have moved past the WCF configuration nightmares.
WCF is an unmitigated dumpster fire. We have actually written a non-WCF client that uses a raw HttpClient implementation with StringBuilder to compose SOAP envelopes around cached XMLSerializers in order to talk to other WCF services. First request delay went from 1-2 seconds down to a few milliseconds. Memory overhead is negligible now. Prior, you could watch task manager and immediately recognize when WCF is "warming up". Additionally, the XML serializer in .NET seems almost pathologically determined to ruin everything you seek to accomplish.
By comparison, JSON contracts are an absolute joy to work with. We still practice strong-typing on both sides of the wire (we control both ends), and have pretty much nothing to complain about. If you are concerned with space overhead w/ JSON, simply passing it through gzip can get you down to a very reasonable place for 99% of use cases. I understand that there are arguments to be made against JSON for extremely performance sensitive applications, but I would counter-argue that these are extremely rare in practice.
They are the same size as UTF-8-style numbers but much slower to decode. I think the continuation-bit varint format is the only glaring mistake in protobuf that can never be fixed.
C structs do not compose extensibly. Protobufs do. You can't put variable-length data into a struct, and hence you can't nest extensible structs inside one either.
You definitely can, but it's not as obvious: make a separate message type for list elements and append them on the wire. If you only have one list at the tail, you can use a flexible array member at the end, but it's finicky to deal with if you need more than one.
You can build large hierarchical structures of messages with lists contained therein. It's pretty much how .mov/.mp4/many, many media container formats work. The technique dates back to the Amiga days.
This is practically exactly what protobufs are, except that they are actually defined clearly enough that multiple services written in multiple languages can work with them.
Definitely not; protobuf's strange wire format becomes apparent if you ever look at the hexdump of one, or at the profiler output of your favourite protobuf-decoding C/C++ application.
They're actually kind of performance heavy for no benefit.
I once looked at a benchmark that compared protobuf, MessagePack, JSON, and a variety of other serialization formats. In terms of bytes per message, gzipped JSON was ahead of all of them, at the cost of increased CPU time for gzip. Protobuf did pretty poorly; its only benefit was decreased CPU usage. I'm sure you could use some other compression algorithm like LZMA to get both good compression and good performance for JSON messages.
> In terms of reducing bytes per message gzipped json was ahead of all of them
Try gzipping the protobuf. Binary encoding and compression are different things which can be stacked. Gzipped protobuf should be smaller and faster than gzipped json in basically all cases.
I use LZ4 (with "best" compression) for packet captures and replay with great results.
I get about a 37% compression ratio with extremely fast decoding, like 10 million packets per second off an SSD.
It was better than snappy, gzip, and bz2 for the trade-off of compression time, decompression time and file size.
As for protobuf: FlatBuffers, Cap'n Proto, HDF5, and plain C structs all deliver much, much faster decoding. It's really not the best answer for any serialization use case at this point, but it's still inexplicably popular.
Sure, but anything you're trying to transport between languages which don't even agree on endianness will end up like this.
Dumping a struct on a wire is just a wishful dream that turns into a nightmare as soon as you need to send that to a service written in another language or running on another architecture.
Don't get me wrong - there's plenty of insanity in protobufs. But trying to cover the same use-case will not create a simple protocol.
Cap'n Proto isn't well supported apart from C or Rust.
The Python library is an absolute nightmare. Their tests used to catch Exception, so what they ended up testing was basically whether the tests tried to access nonexistent attributes.
The issue is that capnproto is relatively complex, and as such is harder to implement well.
The memory layout of a C struct is ABI and compiler dependent.
Some compilers conform to the same ABI on the same or similar systems and work almost exactly the same, so you may grow old thinking that's how it always is, until it's too late. I think gcc, clang, and Intel work almost the same on Linux and OS X.
Indeed, that's why I specified putting the members of the C structure on the wire, not the structure as a whole, so it's just basic types in network byte order (i.e. consistent endian-ness) being sent.
I've worked on an application where that was the standard data transfer scheme, and then while working with protobuf on another project felt that after looking under protobuf's covers it was doing something very similar but wrapping an entire API around it.
No, not really. #pragma pack and/or __attribute__((packed)) have been supported for eons now and guarantee the alignment of struct members between compilers.
In newer C++ standards, you can also static_assert that the struct is a POD type, to statically ensure that there's no accidental vtable pointer.
This argument pops up every time someone mentions this and every time it's completely uninformed.
Though it should be noted that packed structures cause compilers to produce absolutely garbage code when accessing them (because most of the accesses become unaligned), and it becomes incredibly memory-unsafe (as in "your program may crash or corrupt memory") to take pointers to fields inside the struct, because the compiler (usually) presumes such pointers are aligned.
Explicit alignment doesn't suffer from this problem nearly as badly (yeah, you might have to add some padding but that's hardly the end of the world -- and if you have explicit padding fields you can reuse them in the future).
Why even put them in network byte order? Every modern system is little endian, if you standardize on that, only exotic systems would have to deserialize anything.
If you force the most common system to translate byte order, then you'll have some confidence that your code is performing the translation correctly. If instead you rely on hoping that everyone added the correct no-op translation calls everywhere, you'll find your code doesn't work as soon as you port it to another CPU.
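For example, assuming POSIX `htonl`/`ntohl`:

```c
#include <stdint.h>
#include <arpa/inet.h>  /* htonl / ntohl */

/* On a little-endian machine htonl() actually swaps bytes, so a
 * missing or doubled conversion shows up immediately in testing,
 * instead of silently working until the code is ported to a
 * big-endian CPU where the no-op assumption breaks. */
uint32_t wire_encode(uint32_t host_value) { return htonl(host_value); }
uint32_t wire_decode(uint32_t wire_value) { return ntohl(wire_value); }
```

Had the wire format been little-endian instead, both functions would compile to no-ops on x86, and a forgotten conversion would be invisible until the first big-endian port.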
This is a nice side effect of network byte order being the opposite of the dominant CPU order, though obviously it was never intended.
Because when someone builds a hugely popular exotic system in the future, because it is one (1) cent cheaper, you'd end up with code that has to check to see if it's running on such a system.
This doesn't make any sense for multiple reasons, but especially because you wouldn't be checking anything in the first place. A big-endian system would reorder bytes, and a little-endian system would just use the data directly from memory without another copy or any reordering.
There's no library pattern for host-to-little-endian, or little-endian-to-host, like we have with hton and ntoh, which makes it more likely to be messed up.
> Written by a VC trying to get engineers to join the startups he's invested in.
Actually, for most of the piece he was pretty straightforward about making sure you get equity if you want to get paid upon company success.
What has changed is that genuine equity in startups isn't being offered as readily as it was ~15 years ago (and it is notable that the missed opportunity he describes is that far in the past).