
> the idealism of open source shouldn't have survived its contact with capitalism

The US economy of the 1980s, 1990s, and 2000s made it possible.


> And really, really boring and slow a lot of the time.

It's not boring on a giant display with the original 6-track mix playing just a tad too loud all around you. I've seen it in 70mm at the AFI in Silver Spring, MD; candy for the eyes and ears.

It would likely be boring if played at a quiet volume on a small display. This is because movies are, in part, spectacle. Cirque du Soleil would likely be boring too if viewed very, very far away.


> Stock markets are at 10 year peaks.

Put the value of gold on the X axis instead of the USD.


> Of course it's very disruptive for people that lose their jobs,

Why would the Jevons Paradox not apply here?


It does.

There will be loads more people who will want software customized to themselves and their needs!

The catch, of course, is that there are, all of a sudden, a whole lot more people who will now be able to create that software.

How will it all land? No idea. But it just feels like a bad idea to go long on software development when weighed against the opportunity cost of going long on domain expertise.

For instance, from 1980 to 1990, the number of secretaries doing all the typing and filing in the workforce contracted severely. That said, the number of actual typists in the workforce skyrocketed!

No one lost the need for typing and filing services. Tools (PCs, word processors, databases) simply became more available, which decreased the need for people who formerly did the typing and filing as a service. Now people could reliably do the typing and filing on their own.

Jevons paradox in action! Exponentially more typing and filing is happening today than was happening in 1976 or 1980. At the same time, there are far fewer actual secretaries in the workforce today than there were pre-1980. And the ones that are still in the workforce are doing very different work than they did pre-1980.


> Their biggest problem[…]

is demographic in nature. https://www.populationpyramid.net/china/2024/


Great website, and I'm not going to downplay the problem, but you can check out other countries and see that a lot of places - particularly in the West - are f*cked. That China is too isn't much of an upside. Honestly, it's kinda shocking how bad things are going to get, and I'm not sure what can be done, if anything, at this point.

China is probably among the best countries in the world at handling so-called "demographic collapse". Elders are relatively healthy and multigenerational households are more common. It's a leader in robotics. News flash: you don't need a billion hard-working peasants in 2026 to be productive.

People in general don't seem to look at how much "productive" population you need in the real economy to support a given population. Things look pretty fine by those metrics, and if the AI claims are to be believed, they're about to rapidly get even better. How to motivate and compensate that small number of people in the real economy that supports human welfare is a different question.

Also people appear to be blind to the real material limits that really start to be pushed by large populations. You could end up making life materially worse by trying to "fix" the demographics by adding more humans.


Korea has a similar demographic shape, and Japan already passed its peak around 2005: https://www.populationpyramid.net/japan/2024/

Roujin Z shows Japan already saw what was coming over 20 years ago.

This is a profoundly important - central, even - issue that I am very surprised not to see widely understood or acknowledged.

China is in a life-or-death race against time. A good number of their decisions are explained when viewed through this demographic implosion-bomb they are facing.


The same can be argued for Russia. Many -- myself included -- believe it's the #1 reason Putin decided to invade Ukraine, as its youth are seen by the Kremlin as "Russian enough".

> You build stuff, and there is so many manual steps

"The real goal isn’t to write C once for a one-off project. It’s to write it for decades. To build up a personal ecosystem of practices, libraries, conventions, and tooling that compound over time."


You mean, you are not worried about high complexity of codebase because you work with it every day for decades, so you know all this complexity by heart?

This basically requires one to work solo, neither receiving nor sharing source code with others, and treating third-party libraries as black boxes.

I guess this can work for some people, but I don't think it would work for everyone.


> so you know all this complexity by heart?

No. It's that you've built up a personal database of libraries, best-practices, idioms, et al. over decades.

When you move on to a new project, this personal database comes with you. You don't need to wonder if version X of Y framework or library has suddenly changed and then spend a ton of time studying its differences.

Of course, the response to this is: "You can do this in any language!"

And you'd be right, but after 20 years straight of working in C alongside teams working in Java, Perl, Python, Scheme, OCaml, and more, I've only ever seen experienced C programmers hold on to this kind of digital scrapbook.


I don't see how this can work.

You have your personal string library... and you move to a new project, and it has its own string library (that's pretty much a given, because the C stdlib sucks at strings). So what next? Do you rewrite the entire project to use _your_ string library and enjoy a familiar environment, until the next "experienced C programmer" comes along? Or do you give up on your own string library and start learning whatever the project uses?

And this applies to basically everything. The "personal database" becomes pretty useless the moment the second person with such database joins the project.

(This is a big part of Python's popularity, IMHO. Maybe "str" and "logger" are not the best string and logger classes in the world, but they are good enough and in stdlib, so you never have to learn them when you start a new project)


It's not the "string library" that's important, but standardized interface types - so that different libraries can pass strings to each other while each project can still select the string implementation that best matches its requirements.

In C, this standardized string interface type happens to be a pointer to a zero-terminated bag of bytes - not exactly perfect in hindsight, but as long as everybody agrees on that standard, the actual code working on strings can be swapped for another implementation just fine.

E.g. a minimal stdlib should mostly be concerned with standardized interface types, not with the implementations behind those types.


Are you saying that if you join an existing project which uses acuozzo_strzcpy, and you need to do some string copying, then instead of using the same function everyone else already uses, you'll bring your own library and start using flohofwoe_strjcpy in all the code _you_ write (assuming both work on char* types)?

This.. does not seem like a very good idea if you want your contributions to be received well.


I mean, it depends? The fact that it's possible doesn't mean it's a good idea, but at least it's possible. Maybe flohofwoe_strjcpy has a slight performance advantage in an extremely esoteric edge case but extremely hot loop that wasn't considered by acuozzo_strzcpy.

(Not a GP) I think you can see how poorly the string abstraction argument looks in context of a team-based project. Instead of dismissing it completely I would like to provide an example of a context where C is perfectly fine now.

Consider data compression library like Oodle. Even with closed source and use of dangerous things like multiple threads it is perfectly reasonable deal if game project's budget has money to be spent on performance.

The thing is, if a game project has money, it is not likely to be interested in game engines or core middleware libraries (like physics, sound, or occlusion culling) written in C. Because after buying a license, your team is still expected to work in that code even if official support is very active.

Disclaimer: I work in gamedev and have never needed to touch C code.


> neither receiving not sharing the source code with others, treating third-party libraries as blackboxes.

Tbh, this is an intriguing idea. Determine the size of a library (or module in a bigger system) by what one programmer can manage to build and maintain. Let the public interface be the defining feature of a library, not its implementation.


Right.

The article is well written and the author has touched upon all the points which make C still attractive today.


You're experiencing throttling. Use the API instead and pay per token.

You also have to treat this as outsourcing labor to a savant with a very, very short memory, so:

1. Write every prompt like a government work contract in which you're required to select the lowest bidder, so put guardrails everywhere. Keep a text editor open with your work contract, edit the goal at the bottom, and then fire off your reply.

2. Instruct the model to keep a detailed log in a file and, after a context compaction, instruct it to read this again.

3. Use models from different companies to review one another's work. If you're using Opus-4.5 for code generation, then consider using GPT-5.2-Codex for review.

4. Build a mental model for which models are good at which tasks. Mine is:

  4a. Mathematical Thinking (proofs, et al.): Gemini DeepThink

  4b. Software Architectural Planning: GPT5-Pro (not 5.1 or 5.2)

  4c. Web Search & Deep Research: Gemini 3-Pro

  4d. Technical Writing: GPT-4.5

  4e. Code Generation & Refactoring: Opus-4.5

  4f. Image Generation: Nano Banana Pro

> You're experiencing throttling. Use the API instead and pay per token.

That was using pay per token.

> Write every prompt like a government work contract in which you're required to select the lowest bidder, so put guardrails everywhere.

That is what I was doing yesterday. Worked fantastically. Today, I do the very same thing and... Nope. Can't even stick to the simplest instructions that have been perfectly fine in the past.

> If you're using Opus-4.5 for code generation, then consider using GPT-5.2-Codex for review.

As mentioned, I tried using Opus, but it didn't even get the point of producing anything worth reviewing. I've had great luck with it before, but not today.

> Instruct the model to keep a detailed log in a file and, after a context compaction

No chance of getting anywhere close to needing compaction today. I had to abort long before that.

> Build a mental model for which models are good at which tasks.

See, like I mentioned before, I thought I had this figured out, but now today it has all gone out the window.


Drives me absolutely crazy how lately any time I comment about my experience using LLMs for coding that isn’t gushing praise, I get the same predictable, condescending lecture about how I'm using it ever so slightly wrong (unlike them) which explains why I don't get perfect output literally 100% of the time.

It’s like I need a sticky disclaimer:

  1. No, I didn’t form an outdated impression based on GPT-4 that I never updated, in fact I use these tools *constantly every single day* 
  2. Yes, I am using Opus 4.5
  3. Yes, I am using a CLAUDE.md file that documents my expectations in detail
  3a. No, it isn’t 20000 characters or anything
  3b. Yes, thank you, I have in fact already heard about the “pink elephant problem”
  4. Yes, I am routinely starting with fresh context
  4a. No, I don’t expect every solution to be one-shotable 
  5. Yes, I am still using Opus fucking 4.5 
  6. At no point did I actually ask for Unsolicited LLM Tips 101.
Like, are people really suggesting they never, ever get a suboptimal or (god forbid) completely broken "solution" from Claude Code/Codex/etc?

That doesn't mean these tools are useless! Or that I’m “afraid” or in denial or trying to hurt your feelings or something! I’m just trying to be objective about my own personal experience.

It’s just impossible to have an honest, productive discussion if the other person can always just lob responses like “actually you need to use the API not the 200/mo plan you pay for” or “Opus 4.5 unless you’re using it already in which case GPT 5.2 XHigh / or vice versa” to invalidate your experience on the basis of “you’re holding it wrong” with an endlessly slippery standard of “right”.


When I wrote my reply I was not familiar with the existing climate of LLM-advice-as-a-cudgel that you describe.

> to invalidate your experience on the basis of “you’re holding it wrong”

This was not my intent in replying to 9rx. I was just trying to help.


GP didn’t, but I’ve found the tips that you’ve shared helpful, so thank you for taking the time.

Nonsense. I ran an experiment today - trying to generate a particular kind of image.

It's been 12 hours and all the image-gen tools have failed miserably. They are only good at producing surface-level stuff; anything beyond that? Nah.

So sure, if what you do is surface level (and crap, in my opinion), of course you will see some kind of benefit. But if you have any taste (which I presume you don't), you would readily admit it is not all that great and the amount invested makes zero sense.


> if what you do is surface level (and crap in my opinion)

I write embedded software in C for a telecommunications research laboratory. Is this sufficiently deep for you?

FWIW, I don't use LLMs for this.

> But if you have any taste (which I presume you dont)

What value is there to you in an ad hominem attack here? Did you see any LLM evangelism in my post? I offered information based on my experience to help someone use a tool.


Can you please dig into this more deeply or suggest somewhere in which I can read more?

The economy in the 21st century developed world is mostly about acquiring positional goods. Positional goods as "products and services valued primarily for their ability to convey status, prestige, or relative social standing rather than their absolute utility".

We have so much wealth that wealth accumulation itself has become a type of positional good as opposed to the utility of the wealth.

When people in the developed world talk about the economy they are largely talking about their prestige and social standing as opposed to their level of warmth and hunger. Unfortunately, we haven't separated these ideas philosophically so it leads to all kinds of nonsense thinking when it comes to "the economy".


Money is an IOU; debt. People trade things of value for money because you can, later, call the debt and get the exchanged value that was promised in return (food, shelter, yacht, whatever). I'm sure this is obvious.

I am sure it is equally obvious that if I take your promise to give back in kind later when I give you my sandwich, but never collect on it, that I ultimately gave you my sandwich for free.

If you keep collecting more and more IOUs from the people you trade your goods with, realistically you are never going to be able to convert those IOUs into something real. Which is something that the capitalists already contend with. Apple, for example, has umpteen billions of dollars worth of promises that they have no idea how to collect on. In theory they can, but in practice it is never going to happen. What don't they already have? Like when I offered you my sandwich, that is many billions of dollars worth of value that they have given away for free.

Given that Apple, to continue to use it as an example, has been quite happy effectively giving away many billions of dollars worth of value, why not trillions? Is it really going to matter? Money seems like something that matters to peons like us because we need to clear the debt to make sure we are well fed and kept warm, but for capitalists operating at scales that are hard for us to fathom, they are already giving stuff away for free. If they no longer have the cost of labor, they can give even more stuff away for free. Who — from their perspective — cares?


Money is less about personal consumption and more about a voting system for physical reality. When a company holds billions in IOUs, they are holding the power to decide what happens next. That capital allows them to command where the next million tons of aluminum go, which problems engineers solve, and where new infrastructure is built.

Even if they never spend that wealth on luxury, they use it to direct the flow of human effort and raw materials. Giving it away for free would mean surrendering their remote control over global resources. At this scale, it is not about wanting more stuff. It is about the ability to organize the world. Whether those most efficient at accumulating capital should hold such concentrated power remains the central tension between growth and equality.


The gap for me was mapping [continuing to hoard dollars] to [giving away free goods/services], but it makes sense now. I haven't given economics thought at this level. Thank you!

It's really simple: if you crash the market and you are liquid you can buy up all of the assets for pennies. That's pretty much the playbook right now in one part of the world, just the same happened in the former Soviet Union in the 90's.

I get (and got) that. My focus was specifically on: "its not clear if, approximately speaking, anyone actually has the 'money' to buy any goods now."

Cause it’s mostly bought on credit now, not with cash

Write your work order with phases (to a file) and, between each phase, give it a non-negotiable directive to re-read the entire work order file.

Claude-Code is terrible with context compaction. This solves that problem for me.
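A minimal sketch of what such a phased work-order file might look like (the task, paths, and filenames are all invented for illustration):

```markdown
# Work order: migrate the logging module (hypothetical task)

## Non-negotiable directives
- After completing each phase, STOP and re-read this ENTIRE file
  before starting the next phase.
- Do not modify any files outside src/logging/.
- Keep a running log of what you did in NOTES.md.

## Phase 1: inventory
List every call site of the old logger. Record the list in NOTES.md.

## Phase 2: migration
Replace each call site with the new API. Update NOTES.md as you go.

## Phase 3: verification
Run the test suite and record the results in NOTES.md.
```

The re-read directive is the key part: even if the model's context gets compacted mid-task, the next phase boundary forces the full instructions back into context.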


No because switching to the API with the same prompt immediately fixes it.

There's little incentive to throttle the API. It's $/token.

