
And then you introduce two extra levels of nested loops and suddenly "i", "j", and "k" don't make any sense on their own, but "ProductIndex", "BatchIndex" and "SeriesIndex" do.

ijk for indices in loops are actually clearer than random names in nested loops precisely because it is a *very common convention* and because they occur in a defined order. So you always know that "j" is the second nesting level, for instance. Which relates to the visual layout of the code.
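A minimal Go sketch of what I mean (the 2x2x2 grid and values are made up): letter order and indentation depth line up, so someone scanning knows where they are without reading the names.

```go
package main

import "fmt"

// fillGrid fills a hypothetical 2x2x2 grid. By convention i, j and k map to
// nesting levels 1, 2 and 3, matching the indentation you see at a glance.
func fillGrid() [2][2][2]int {
	var grid [2][2][2]int
	for i := 0; i < 2; i++ {
		for j := 0; j < 2; j++ {
			for k := 0; k < 2; k++ {
				grid[i][j][k] = i*4 + j*2 + k
			}
		}
	}
	return grid
}

func main() {
	fmt.Println(fillGrid()[1][0][1]) // i=1, j=0, k=1 -> 5
}
```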

You may not have known of this convention or you are unable to apply "the principle of least astonishment". A set of random names for indices is less useful because it communicates less and takes longer to comprehend.

Just like most humans do not read text one letter at a time, many programmers also do not read code as prose. They scan it rapidly looking at shapes and familiar structures. "ProductIndex", "BatchIndex" and "SeriesIndex" do not lend themselves to scanning, so you force people who need to understand the code to slow down to the speed of someone who reads code like they'd read prose. That is a bit amateurish.


> ijk for indices in loops are actually clearer than random names in nested loops precisely because it is a very common convention and because they occur in a defined order. So you always know that "j" is the second nesting level, for instance. Which relates to the visual layout of the code.

In problem domains that emphasize multidimensional arrays, yes.

More often nowadays I would see `i` and think "an element of some sequence whose name starts with i". (I tend to use `k` and `v` to iterate keys and values of dictionaries, but spell `item` in full. I couldn't tell you why.)


I partly agree, and partly don't. When ijk really is unambiguous and the order is common (say you're implementing a well-known algorithm) I totally agree, the convention aids understanding.

But nesting order often doesn't control critical semantics. Personally, it has much more often implied a heuristic about the lengths or types (map, array, linked list) of the collections (i.e. mild tuning for performance but not critical), and it could be done in any order with different surrounding code. There the letters are meaningless, or possibly worse because you can't expect that similar code elsewhere does things in the same nesting order.

This likely depends heavily on your field though.


I think I know what you mean. Let's assume a nesting structure like this:

Company -> Employee -> Device

That is, a company has a number of employees that have a number of devices, and you may want to traverse all devices. If you are not interested in where in the list/array/slice a given employee is, or a given device is, the index is essentially a throwaway variable. You just need it to address an entity. You're really interested in the Employee structure -- not its position in a slice. So you'd assign it to a locally scoped variable (pointer or otherwise).

In Go you'd probably say something like:

    for _, company := range companies {
        for _, employee := range company.Employees {
            for _, device := range employee.Devices {
                // ..do stuff
            }
        }
    }

ignoring the indices completely and going for the thing you want (the entity, not its index).

Of course, there are places where you do care about the indices (since you might want to do arithmetic on them). For instance if you are doing image processing or work on dense tensors. Then using the convention borrowed from math tends to be not only convenient, but perhaps even expected.
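A sketch in Go of the kind of index arithmetic I mean (the flat buffer layout, sizes and values are all invented): row-major addressing of a pixel buffer, where the indices themselves carry meaning.

```go
package main

import "fmt"

// pixelAt does the row-major index arithmetic for a hypothetical flat
// grayscale buffer of width w: the index is computed, not just iterated.
func pixelAt(pix []byte, w, x, y int) byte {
	return pix[y*w+x]
}

func main() {
	w, h := 4, 3
	pix := make([]byte, w*h)
	for y := 0; y < h; y++ {
		for x := 0; x < w; x++ {
			pix[y*w+x] = byte(10*y + x) // each value encodes its own coordinates
		}
	}
	fmt.Println(pixelAt(pix, w, 3, 2)) // 23
}
```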


I think this may be related to how people read code. You have people who scan shapes, and then you have people who read code almost like prose.

I scan shapes. For me, working with people who read code is painful because their code tends to have less clear "shapes" (more noise) and reads more like a verbal description.

For instance, one thing I've noticed is the preference for "else if" rather than switch structures. Because they reason in terms of words. And convoluted logic that almost makes sense when you read it out loud, but not when you glance at it.

This is also where I tend to see unnecessarily verbose code like

    func isZero(a int) bool {
        if a == 0 {
            return true
        } else {
            return false
        }
    }

strictly speaking not wrong, but many times slower to absorb. (I think most developers screech to a halt and their brain goes "is there something funny going on in the logic here that would necessitate this?")
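For comparison, the whole predicate collapses to a single expression, since the comparison already is the bool we want:

```go
package main

import "fmt"

// isZero without the branch: the comparison itself is the return value.
func isZero(a int) bool {
	return a == 0
}

func main() {
	fmt.Println(isZero(0), isZero(7)) // true false
}
```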

I deliberately chose to learn "scanning shapes" as the main way to orient myself because my first mentor showed me how you could navigate code much faster that way. (I'd see him rapidly skip around in source files and got curious how he would read that fast. Turns out he didn't. He just knew what shape the code he was looking for would be).


> strictly speaking not wrong, but many times slower to absorb. (I think most developers screech to a halt and their brain goes "is there something funny going on in the logic here that would necessitate this?")

I agree with this, but can't see how this applies to variable naming. Variable names can be too long, sure, but in my opinion, very short non-obvious variable names also make scanning and reading harder since they are not familiar shapes like more complete words. Additionally, when trying to understand more deeply, you have to stop and read code more often if a variable's meaning is not clear.

That said, 1-2 char variable names work well in short scopes, like in some lambda, or when using 'i' for an index in a loop (nested loops would depend on situation), but those are an exception.

Like always, this is probably subjective too. And a well-organized codebase probably helps to keep functions shorter, but there's often not much I can do about an existing codebase having overgrown functions all over.


> I think this may be related to how people read code. You have people who scan shapes, and then you have people who read code almost like prose.

I think this is an astute observation.

I think there is another category of "reading" that happens: whether you're reading for "interaction" or "isolation".

Sure, c.method is a scannable shape, but if your system deals with Cats, Camels, Cars, and Crabs, that same c.method might not be as helpful when dealing with an abstract API call divorced from the underlying representation.

I would think that we would have more and better research on this, but the only paper I could find was this: https://arxiv.org/pdf/2110.00785 It's a meta-analysis of 57 other papers, a decent primer but nothing groundbreaking.

> I scan shapes. ... verbal description.

I would be curious if you frequently use a debugger? Because I tend to find the latter style much more useful (descriptive) in that context.


> dealing with an abstract api call divorced from the underlying representation

I don't understand what you mean. Could you give me an example?

> I would be curious if you frequently use a debugger?

I practically never use a debugger.


The shape argument works well in small packages, but it starts to fail once you have multiple domain models starting with the same letter.

I wasn’t talking about just symbols but entire paragraphs of code as well.

I think this is pretty insightful, and I might add this as another reason LLM code looks so revolting. It's basically writing prose in a different language, which makes sense - it's a _language_ model, it has no structural comprehension to speak of.

Whereas I write code (and expect good code to be written) such that most information is represented structurally: in types, truth tables, shape of interfaces and control flow, etc.


If your loops are so long you can't fit them on one screenful, you have much more fundamental issues.

You aren't wrong, but it is not an absolute.

Furniture maker, house framer, finish carpenter are all under the category of woodworking, but these jobs are not the same. Years of honed skill in tool use makes working in the other categories possible, but quality and productivity will suffer.

Does working in JS, on the front end teach you how to code, it sure does. So does working in an embedded system. But these jobs might be further apart than any of the ones I highlighted in the previous category.

There are plenty of combinations of systems and languages where your rule about a screen just isn't going to apply. There are plenty of problems that make scenarios where "ugly loops" are a reality.


I didn't say it was an absolute. But once a scope grows to the point where you have to navigate to absorb a function or a loop, both readability and complexity tend to worsen. As does your mental processing time. Especially for people who "scan" code rapidly rather than reading it.

The slower "readers" will probably not mind as much.

This is why things like function size are usually part of coding standards at a company or on a project. (Look at Google, Linux, etc.)


> person.Age is easier to understand than p.Age regardless of the loop size.

Another point of view: ideally it would just be "Age". But in languages that don't have the ability to "open" scopes, one might be satisfied with p.Age, being "the age". I've also seen $.age and it.age, in languages with constructs that automatically bind an "it" anaphor.

I've spent a lot of time in the past 15 years turning photos into various kinds of prints. From cyanotypes using printed contact negatives, via multi-layer stencil art to my current obsession: vectorizing images, separating the layers, machining linoleum blocks and then doing multi-layer prints. Once I have a stable workflow for lino prints the next thing I'm going to try is to use mokuhanga instead of linoleum.

(I also plan to try platinum/palladium prints. They look gorgeous. But first I need to get better at shooting for B/W)


The problem isn't that keepers necessarily need editing, the problem is that it is tempting (to some) to spend more time than they need simply because they can. Or because they feel they should. (Don't watch people ruining photos and making up for their lack of talent on YouTube.)

I shoot with post-processing in mind because I have years of experience with the cameras I use, so I know how they work. I rarely do that much more than just "normalizing" the pictures to what I wanted to capture (fix one, apply to whole batch) and apply some look that I've saved as a preset. Perhaps 1-5 seconds of tweaks per photo. If you need more, you probably didn't get the shot in the first place and you'll do better next time.

For me the time spent "editing" photos is marginal compared to the time I spend looking at the photos to decide which ones are keepers.

I can't understand what the YouTubers who edit photos are doing. Most of them take mediocre to bad shots and then somehow manage to make them worse. And then people believe that this is what they're supposed to be doing.

Then again, most photo-influencers don't actually understand even something as basic as focal length (no, a 105mm is a 105mm regardless of whether you put it in front of a tiny sensor or a big honking medium format).


> Most of them take mediocre to bad shots and then somehow manage to make them worse.

Examples of this? What do you consider mediocre, but is still hugely popular?


It would be unkind to single anyone out so I am not going to. The thing is, I can't, for the life of me, think of a single photo influencer/YouTuber who is also a popular _photographer_. They're popular influencers/YouTubers.

As for what I mean by mediocre: let's say you are looking at a portrait of someone you don't know. If you can't remember it 10-30 minutes later, it was probably mediocre or worse. Would you recognize the subject if you met them on the street a day later?

Most portraits tend to be bad because they completely fail to capture the subject. People fuss over lighting and editing and color grading and whatnot, but they don't actually pay attention to the person they are shooting. I see quite a few of these people with huge social media followings who can't, for the life of them, take pictures of humans. And yet, they teach their inability to make portraits to others.

I also know professional photographers who are genuinely bad at taking portraits. And then there are those rare people who just nail it most of the time. Notice this when looking at a portrait of someone you know. Is it "them"?

Another category where you see a lot of bad photos is wildlife photography. You will see endless pictures of birds that possibly could go in a bird-spotting book purely for identification purposes. But, to steal a line from my wife after looking at a certain facebook group "it's just a bunch of tack sharp ducks set against blurred out sky". And my wife spends an inordinate amount of time looking at birds.

All you need to make bad nature photography is a big lens, a location and some time. It takes no talent. All you need to make technically good, but completely pointless nature photos is a big lens, a location, time and a decent modern camera. Then turn on 3D tracking and spray whenever something moves. Animals live in nature -- they belong in context -- they do things. Good nature photographers manage to communicate this.

(I was actually tempted to name the "it's just a bunch of tack sharp ducks..."-group, but I'm not going to. Though it isn't that hard to guess).

We drown in technically excellent images that are dull as crap.

(To be clear: I'm a mediocre photographer. I'm very aware of it. I occasionally shoot something that may be worth looking at -- but still rarely something you'd remember)


Thanks for the reply, I appreciate it.

Laws appear to have fallen out of fashion. And a disturbing proportion of the loudest people like it. Then you have those who ought to know better but are attention-seeking, selfish assholes who somehow find it «interesting» or think they adhere to «principles».

The latter category: you know who you are. You downvoted this comment.


I recently provided guidance to state legislators, with that guidance making its way into law with regard to balcony solar. If you don't think that making law works, I would encourage you to get involved somewhere that means something to you.

It turns out that if you present as an honest, non-interested party, people will call you and ask you for your advice. I do admit that the ease of this is going to be a function of the people you are up against and the subject being regulated. My point of this comment is: default to action. “You can just do things.”


The issue is not laws, or the making of them (although Congress hasn't exactly been overly productive). The issue is the executive branch not abiding by laws.

> Laws appear to have fallen out of fashion.

Laws are very much fashionable, but only for us. “Rules for thee but not for me” is what's in season right now.


Importantly, seasons change.


I recently saw “Mountainhead”. Apropos.


Had to look it up. Added to the list.


Given that Apple tends to have long periods of crud accumulating, and releases becoming slower, buggier and more annoying, they should revamp their entire release process and make quality a more prominent part of it. Linux did so with its odd/even version numbering to signal which kernels were considered stable and which were development versions.

For each major release cycle the longest part of the cycle should be focused on code quality and cleanup. So that people who depend on the stability of their operating environment can configure the software update process to just wait until a new OS release has gone through a bugfix AND cleanup cycle.

Why spend more time on cleanup than on features? Well, so far it seems to have been the other way around. Which means that everyone has to waste a lot of time while some experimental OS is making your life miserable. People who want to use bleeding edge features can upgrade as soon as a new major release is dropped. But people like me, who depend on their phone and computer to make a living, would rather not be field-testing buggy, slow experimental code.

And not to put too fine a point on it, iOS was crap. And from what I am hearing macOS Tahoe isn't worth the upgrade so I keep clicking away those annoying popups that try to get me to install it.

Yeah, I get it, the guy from marketing isn't going to like it, but we could also stop pretending that every new major release is a gift to humanity. We don't think so and Apple knows it isn't so. Every release comes with dread. What will stop working this time?

It isn't like Apple doesn't have the means to hire developers.


Berkeley DB is one of those things everyone respected, for some reason, but that didn't actually work if you threw a bit of data at it. And not just for us. I remember talking to companies that paid them lots of money to work on reliability, and it never got better.

But I do remember reading much of the source (trying to figure out why it didn't work) and thinking "this is pretty nice code".


Well, it worked for Amazon — Berkeley DB was used extensively there as the main database, right from the beginning. I remember talking to an ex-Amazon engineer in 2006 who said BDB was still the main database used for inventory, and complained that everything was a mess, with different teams using different tech for everything. Around that time Amazon made DynamoDB to solve some of that mess — and it sat on top of BDB.

An old thread about this: https://news.ycombinator.com/item?id=29290095.


It worked well for Amazon because they kept it within a tight operating envelope. They used it to persist bytes on disk in multiple, smaller BDBs per node. This kept it out of trouble. They also sidestepped the concurrency and locking problems by taking care of that in the layers above. It was used more like SSTables in BigTable.

They phased out BDB before DynamoDB was launched. Some time between 2007 and 2010. By the time DynamoDB launched as a product in 2012(?), BDB was gone.


Can verify. When I started in the catalog department in '97, "the catalog" was essentially a giant Berkeley DB keyed on ISBN/ASIN that was built/updated and pushed out (via a mountain of Perl tools) to every web server in the fleet on a regular cadence. There were a bunch of other DBs too, like for indexes, product reviews, and other site features. Once the files landed, the deploy tooling would "flip the symlinks" to make them live.

Berkeley DBs were the go-to online databases for a long time at Amazon, at least until I left at the turn of the century. We had Oracle databases too, but they weren't used in production, they were just another source of truth for the BDBs.


yeah - scars still visible here from a year 2000 project using BerkeleyDB. Unbelievable complexity to write adapters to ordinary desktop software.


That bit reminded me of someone who wanted us to design a patch the size of a small postage stamp, at most 0.2mm thick, that you could stick on products. It was to deliver power for two years of operation, run an LTE modem, a GNSS receiver, an MCU, and temperature and humidity sensors, and it would cost $0.10. And it would send back telemetry twice per day.


'A mere matter of engineering'.


The conversation went something like this (from memory):

- We can't do that

- Why not?

- Well, physics for one.

- What do you mean?

- Well, at the very least we need to be able to emit enough RF-energy for a mobile base station to be able to detect it and allow itself to be convinced it is seeing valid signaling.

- Yes?

- The battery technology that fits within your constraints doesn't exist. Nevermind the electronics or antenna.

- Can't you do something creative? We heard you were clever.

I distinctly remember that last line. But I can't remember what my response was. It was probably something along the lines of "if I were that clever I'd be at home polishing my Nobel medal in physics".

Even the sales guy who dragged me into this meeting couldn't keep it together. He spent the whole one hour drive back to the office muttering "can't you do something creative" and then laughing hysterically.

I think the solution they went for was irreversible freeze and moisture indication stickers. Which was what I suggested they go for in the first 5 minutes of the meeting since that a) solved their problem, and b) is on the market, and c) can be had for the price point in bulk.


That's so hilarious. I've had a couple that went in that direction but nothing to come close.

To be fair though, there is a lot of tech that to me seems like complete magic and yet it exists. SDR, for instance, still has me baffled. Whoever thought you'd simply digitize the antenna signal and call it a day, hardware-wise? The rest is just math, after all.

When you get used to enough miracles like that without actually understanding any of it and suddenly the impossible might just sound reasonable.

> Can't you do something creative? We heard you were clever.

Should be chiseled in marble.


The purely digital neighborhood of the SDRs is much easier to explain than the analog rat droppings between the DAC/ADC and the antenna. That part belongs to dark wizards with costly instruments that draw unsettling polar plots, and whose only consistent output is a request for even pricier gear from companies whose names sound an awful lot like European folk duos.

The digital end of SDRs is simple. Sample it, then once you have trapped the signal in digital form, beat it into submission with the stick labeled "linear algebra".

(Nevermind that the math may be demanding. Math books are nowhere near as scary as the Sacred Texts Of The Dark Wizards)

"Rohde & Schwarz — live at the VNA, 96 dB dynamic range, one night only."


> whose names sound an awful lot like European folk duos.

That had me laughing out loud, you should have left the name out to make it more of a puzzler :)

I apparently have been drawn to the occult for a long time and feel more comfortable with coils, capacitors and transmission lines than I do with the math behind them. Of course it's great to be able to just say 'ridiculously steep bandpass filter here' and expect it to work but I know that building that same thing out of discrete components - even if the same math describes it - would run into various very real limitations soon.

And here I am on a budget SDR speccing a 10 Hz bandfilter and it just works. I know there must be some downside to this but for the life of me I can't find it.


> I know there must be some downside to this but for the life of me I can't find it.

Literally Goethe's Faust (A Tragedy, Part I) .. you're good unless a poodle transforms into Mephistopheles on your deathbed.


I knew it ;)


I like your sales guy. Might have punched them after a while but that's right up there with the time someone tried to tell me there was no iron in steel because it wasn't in the ingredients list. And this someone sold stamped steel parts!


All you need to do is make use of a higher dimension to pack stuff into. And then mass produce to bring costs down. How hard can that be?


Skippy the Magnificent will solve this for us.

(reference to a character in the Expeditionary Force series by Craig Alanson.

Only a very small portion of his physical presence is in local spacetime, with the rest in higher spacetime. He can expand his physical presence from the size of an oil drum or shrink to the size of a lipstick tube. He can’t maintain that for long without risking catastrophic effects. If he did, he would lose containment, fully materialize in local spacetime and occupy local space equal to one quarter the size of Paradise. The resulting explosion would eventually be seen in the Andromeda Galaxy.)

