throw45678943's comments (Hacker News)

Who knows? Maybe with the way AI is going that will be considered a lot of money compared to what people earn on this site.

As in, what people generally earn on this site will crash way down and be outsourced to these models. I'm already seeing it personally from a social perspective: as a SWE, most people I know (including teachers in my circle) look at me as though my days are numbered "because of AI".


My experience has been that, for many years now, .NET programs were typically more tunable for performance than Java ones, even if that performance didn't come free out of the box; and what generally matters with performance is the ability to optimise further what needs to be optimised, so you end up faster for your business domain than the alternative. With Java code it is generally harder and/or less ergonomic to do this.

For example, just having value types and reified generics in combination meant you could write generic code against value types, which for hot algorithmic loops or certain data structures usually meant a big win w.r.t. memory and CPU consumption. For a collection type critical to an app I wrote many years ago, using value types almost halved the memory footprint compared to the best Java equivalent I could find, and was somewhat faster with fewer cache misses. The Java alternative wasn't an amateur one either, but they couldn't get the perf out of it even with significant effort.
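To illustrate the contrast from the Java side (a minimal, hypothetical sketch; the class and method names are mine, not the collection in question): Java's erased generics force primitives to be boxed, so a generic container holds heap-allocated `Integer` objects with per-element header and pointer overhead, whereas a .NET `List<int>` stores the 4-byte values inline.

```java
import java.util.ArrayList;
import java.util.List;

public class BoxingDemo {
    // Generic path: erasure means the type argument must be a reference
    // type, so every element is a heap-allocated Integer (object header
    // plus a pointer indirection per element), not a flat run of ints.
    static long sumBoxed(List<Integer> xs) {
        long total = 0;
        for (Integer x : xs) total += x; // unboxing on every iteration
        return total;
    }

    // Primitive path: values stored inline and contiguously, cache-friendly.
    static long sumPrimitive(int[] xs) {
        long total = 0;
        for (int x : xs) total += x;
        return total;
    }

    public static void main(String[] args) {
        List<Integer> boxed = new ArrayList<>();
        int[] flat = new int[1_000];
        for (int i = 0; i < 1_000; i++) { boxed.add(i); flat[i] = i; }
        System.out.println(sumBoxed(boxed));    // 499500
        System.out.println(sumPrimitive(flat)); // 499500
    }
}
```

With .NET's reified generics, the `sumBoxed` shape simply doesn't arise: a generic method over a value type is specialised to work on the values directly, which is where the memory-footprint and cache-miss gap described above comes from.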

Java also, last time I checked, doesn't have a value decimal type for financial math, which IMO can be a significant performance loss for financial/money-based systems. For anything with heavy math or lots of processing/data structures, I would find .NET significantly faster after doing the optimisation work. If I had to choose between the two targets these days, I would find .NET in general the easier one w.r.t. performance. Of course perf isn't everything, depending on the domain.
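Concretely (a small sketch; the figures are illustrative): Java's exact decimal arithmetic goes through `java.math.BigDecimal`, an immutable reference type, so every operation allocates a fresh heap object, whereas .NET's `System.Decimal` is a 128-bit value type whose arithmetic stays inline.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalCost {
    public static void main(String[] args) {
        // Each arithmetic call below returns a newly allocated BigDecimal;
        // in a hot loop over millions of prices this allocation and GC
        // pressure is the cost being described above.
        BigDecimal price = new BigDecimal("19.99");
        BigDecimal qty   = new BigDecimal("3");
        BigDecimal total = price.multiply(qty);                 // new object
        BigDecimal vat   = total.multiply(new BigDecimal("0.20"))
                                .setScale(2, RoundingMode.HALF_UP); // more objects
        System.out.println(total); // 59.97
        System.out.println(vat);   // 11.99
        // In C#, `decimal total = 19.99m * 3m;` performs the same exact
        // arithmetic on a value type: no per-operation heap allocation,
        // and arrays of decimals are stored flat.
    }
}
```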


Indeed. Decentralised currency is at least a technology that can empower the individual at times, rather than, say, governments and big corps, especially in certain countries. Yes, it didn't change as much as was marketed, but I don't see that as a bad thing. It's still a "tool" people can use, in some cases enabling things they couldn't do, or didn't have the freedom to do, before.

AI, given its requirements for large amounts of computation and money, and its ability to make intelligence easily available to certain groups, IMO has real potential to do the exact opposite: take power away from individuals, especially the middle class and below. In the wrong hands it can definitely destroy openness and freedom.

Even if it is "Open" AI, for most of society the ability to offer labour and intelligence/brain power is the only thing they have with which to gain wealth and sustenance; making that a commodity tilts the power scales. If AI delivers even a slice of what it is marketed at, there are real risks for current society. Even if it increases production of certain goods, it won't increase production of the goods the ultra wealthy tend to hold (physical capital, land, etc.), making them proportionally even more wealthy. This is especially true if AI doesn't start working in the physical realm quickly enough. To most individuals the benefits look more like novelties they could do without, whereas to large corps and ultra-wealthy individuals the benefits IMO are much more obvious (e.g. "we finally don't need workers"). Surveillance, control, persuasion, propaganda, mass uselessness of most of the population, medical advances for the ultra wealthy, weapons, etc. can now be pursued at almost infinite scale and in great detail. If it ever gets to the point of obsoleting human intelligence, it would be a very interesting adjustment period for humanity.

The flaw isn't the technology; it's the likely use of it by humans, given their nature. I'm not saying LLMs are there yet, or even that they are the architecture that will get there, but agentic behaviour and running corporations (which OpenAI states as its goal on its presentation slides) seem to be a way to rid many of the need for other people in general (to help produce, manage, invent and control). That could be a good or a bad thing depending on how we manage it, but one thing it wouldn't be is simple.


I do think the digital realm, where the cost of failure and iteration is quite low, will proceed rapidly. We can brute-force our way to success with a lot of compute, and each failed attempt costs little. Most of these models are just large brute-force probabilistic models in any event; efficient AI has not yet been achieved, but maybe that doesn't matter.

I'm not sure the same pace applies to the physical realm, where costs are high (resources, energy, pollution, etc.) and getting it wrong can carry serious negative consequences. E.g. a robot handling construction materials trips on a barely noticeable rock and leaks paint, petrol, etc. onto the ground, costing not just the materials but the cleanup as well.

This creates a potential future outcome (if I can be so bold as to extrapolate, with the dangers that entails) where this "frenzy of talent", as you put it, innovates itself out of a job, with some cashing out in the short term and closing the gate behind them. What's left, ironically, are the people who can sell, convince, manipulate and work in the physical world, at least in the short and medium term. AI can't fix the scarcity of the physical that easily (e.g. land, nutrients, etc.). The people who still command scarcity will reap the main rewards of AI in our capital system, as value/economic surplus moves to the resources that are scarce and advantaged via relative price adjustments.

People have typically had three different strengths: physical (strength and dexterity), emotional IQ, and intelligence/problem solving. The new world of AI, at least in the medium term (10-20 years), will tilt value away from the last toward the first; IMO a reversal of the last century of change. It may make more sense to get good at gym class and pick up a trade than to study math, for example. Intelligence will be in abundance and become a commodity. This potential outcome alarms me not just from a job perspective, but in terms of fake content, loss of human connection, loss of the value of intelligence in general (people with high IQs will lose respect from society at large), social mobility, etc. I can see a potential return to the old world where lords who command scarcity (e.g. landlords) command peasants again, reversing the gains of the industrial revolution as an extreme case, depending on progress in general AI (not LLMs). For people whose value lies more in capital or land than in labour, AI looks like a dream future IMO.

There's potential good here, but sadly I'm alarmed, because the likelihood that the human race aligns to achieve it is low (the tragedy of the commons). It is much easier, and more likely, that certain groups use it to target people who have economic value now but little power (i.e. the middle class). For me, the chance of new weapons, economic displacement, fake news, etc. trumps a voice/chat bot and a fancy image generator. The "adjustment period" is critical to manage, and I think climate change and other broader issues sadly tell us, IMO, how likely we are to succeed at that.

