
The .COM bubble was more than a divot because .COMs in so many industries employed so many people. There was amazon.com, but also pets.com, lowermybills.com, gateway.com. But if our economy somehow loses access to AI (rationing due to wartime efforts? sabotage by a foreign nation? simply not enough grid power to turn them on at the price people are willing to pay?) I would probably need to hire more coders to get the equivalent work done.

AI is driving trades, materials, real estate, all sorts of downstream stuff.

The rest of the economy is dead. Oracle is dead without OpenAI. Remember that unlike the dot-com era, none of these companies are public. So when it pops, you'll see private credit and PE funds implode, which could bring down banks with unhedged exposure. The headlines talk about JP Morgan (which likely has the risk managed), but regional banks got into that game in a big way in the last couple of years.


Did amazon.com go bust? Seems like I heard they were still in business as of a couple of years ago at least.

The point was that Amazon wasn't independent from the frenzied, leveraged land grab that characterized the .COM bubble. Like many other companies, they were hiring aggressively until the bubble burst. Whether or not the companies went bust, a lot of people lost their jobs in a short time.

10 years ago, a non-technical friend gifted me an eMachines tower that no one bought from his uncle's estate sale. I loaded it with Ubuntu Server, racked it up, and ran a business off of it, storing some backups, generating 500+ daily customer-requested database reports, and generally keeping the CPU busy with batch jobs, builds in Docker, etc. I kept it running on a UPS for years until hard drive errors forced the kernel to mount the disk read-only. I might have kept it going, but I had a replacement on standby.

Its replacement is another cast-off; it uses less electricity and is much more capable, despite not qualifying to run Windows 11.


16MB still seems massive for this kind of app. I ran Visual Studio 4, not an app, but an entire app factory, on a 66MHz 486 with 16MB RAM. And it was snappy. A TODO list app that uses system UI elements could be significantly smaller.

What do I gain if more developers take this approach? Lightning fast performance. Faster backups. Decreased battery drain => longer battery service lifetime => more time in between hardware refreshes. Improved security posture due to orders of magnitude less SLOC. Improved reliability from decreased complexity.


16MB is less than a display buffer for a 4K display. It is never, ever going to happen again, just due to hardware realities.
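As a quick sanity check on that claim (assuming the typical 4 bytes per pixel for an RGBA framebuffer), the arithmetic works out like this:

```python
# Size of a single uncompressed 4K framebuffer, assuming 32-bit RGBA pixels.
width, height, bytes_per_pixel = 3840, 2160, 4

framebuffer_bytes = width * height * bytes_per_pixel
framebuffer_mib = framebuffer_bytes / (1024 * 1024)

print(f"{framebuffer_mib:.1f} MiB")  # roughly 31.6 MiB, about double 16MB
```

So a TODO app's entire 16MB footprint is smaller than what the compositor holds for one frame of its window at 4K.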

Less RAM usage doesn't equal better performance or faster software. It might actually mean the opposite, if you're not caching things in RAM.

If a TODO list app has more than 16MB of data it could possibly cache in RAM then there is already something seriously wrong.

I wonder if a "woot" style service could work. If 10K like-minded consumers made a group-buy every 2-3 years, a high-end panel vendor might be willing to provision a new SKU with a few firmware tweaks.

For a while, Costco had a reputation as the place where you could buy a TV and be confident that it was usable as a "dumb" TV. The rumor (unconfirmed as far as I know) was that, among the customizations that manufacturers would make for retailer-specific models, the Costco ones included firmware tweaks to pull back on requirements for things like mandatory connectivity, account creation and the like.

I'm not sure how true any of that is, but in any case Costco still has a reputation as a place where it's easy to return a TV, and they pay attention to the stated reason for return.


I would go in on a group-buy dumb TV, but not every 2-3 years.

> caveat emperor

s/emperor/emptor

I hope your friend's company spends $20K to harden the deployment of the new app so it doesn't become a deep liability.


Keep dreaming!

The best part is that they'll get popped because of it and have zero clue. Anyone currently building with a frontier provider, but with little background in software, is creating all kinds of new liabilities that didn't exist before.

In a school district where I live, the IT department developed a password-distribution app using Gemini on Google Apps Script (they didn't even need that part) and sent out links containing Base64-encoded JSON with the student name, student email, parent email, and student password. When I found it and told them all the ways it was technically a breach in our state, they ran to their two-bit "cyber security experts" and "legal". They were far more concerned with CYA than with understanding the hole they had dug themselves, and all the advice they got back was that it wasn't a breach. They claimed their DPA with Google protected them. I explained how email works and they just ignored me, likely because in our state they are bound by GDPA and won't ever engage in a legitimate conversation via email.
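To make the core problem concrete: Base64 is an encoding, not encryption. A hypothetical sketch (the field names and values here are made up, not the district's actual payload) of what anyone who sees such a link can do:

```python
import base64
import json

# Hypothetical payload resembling the kind of data described above.
payload = {"student": "Jane Doe", "email": "jdoe@example.edu", "password": "hunter2"}
token = base64.urlsafe_b64encode(json.dumps(payload).encode())

# Anyone who intercepts, forwards, or logs the link recovers everything
# with a single standard-library call -- no key, no secret required.
leaked = json.loads(base64.urlsafe_b64decode(token))
print(leaked["password"])  # prints "hunter2"
```

Email infrastructure routinely logs and forwards URLs, which is why "we emailed a Base64 link" offers no confidentiality at all.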

The kicker here is that they pay for an IDP with built-in mechanisms for password resets (that was the reason for building this: to reset students' passwords). One of their cyber security "experts" (a lone guy with zero credentials, from what I found) told them that password resets using the IDP were "not recommended". When pressed on that, they were, again, silent.

LLMs are creating a huge mess for people now empowered to go well beyond their capabilities and understanding. It's a second coming of the golden age of shitty software that's riddled with even the most basic of security flaws.


I'm just going to keep building software mostly traditionally, while using "AI" to help me research things quicker (might as well use it while it's here), survive the shitpocalypse, and then laugh as traditional-minded developers become a scarce sought-after resource again.

Either way, the instability of this industry due to the insane amounts of cargo culting every time <insert big thing> comes along has made me really question whether I want to stick around.


> Either way, the instability of this industry due to the insane amounts of cargo culting every time <insert big thing> comes along has made me really question whether I want to stick around.

Whatever you do, don't click this link: https://github.com/garrytan/gstack/


I think this is where a lot of freelance contractors could pivot to: basically "last mile" coding, where the LLM does the front-end work, and then high-hourly-pay engineers come in and fix it. It'd still be cheaper than a lot of the industry-niche software, which is usually pretty bad.

thanks for the correction

I hear you but at least as my bud described it, the software that most of the timber mill industry uses is buggy as hell, crashes all the time, and makes mistakes. One would wonder if even the licensed software is hardened.


None of the top cyber security talent I've worked with went to school for it, and I have been underwhelmed by what I see coming from college programs. These kinds of credentials themselves are not a signal of quality to me.

>The kinds of credentials themselves are not a signal of quality to me.

i hear this online a lot but never from the companies and hiring managers that hired our cybersec students for the last decade.

keep in mind, this is not a 6-month "intro to cybersec" or bootcamp-style program.


Goodwill with hiring managers is good. But in a down economy it'd be helpful to boost your reputation more broadly.

If I were running your college's program, I would invest in a presence at Defcon. If just one of your students could use their skills to uncover and present something genuinely interesting, it would be worth covering their airfare and accommodations just to get your logo on the screen. If you could do this every other year, your program would have an unparalleled brand.


>Goodwill with hiring managers is good. But in a down economy it'd be helpful to boost your reputation more broadly.

part of our success over the years has been due to our reputation building, presence at local/state/national conventions, etc. that is exactly why the sudden downturn in hiring has been eye-opening.


WireGuard exemplifies the superiority of a qualified independent developer over the fractal layers of ossified cruft that you get from industry efforts and compliance STIGs.

So it feels wrong to see WireGuard adapted for compliance purposes. If compliance orgs want superior technology, let their standards bodies approve/adopt WireGuard without modifying it.


> fractal layers of ossified cruft

Someone got a thesaurus in their coffee today! (Not a jab)


but wolfssl is in the business of selling FIPS compliance so…

And they do it fast; thankfully the Compliant Static Code Analyser catches issues like https://github.com/wolfSSL/wolfGuard/commit/fa21e06f26de201b...

Holy shit. Those are rookie mistakes[1], that could end up being SEVERE.

[1] Not referring to the fixes.


Looks like AI to me. It's always making rookie mistakes that look plausible!

No, I mean that, for example, uninitialized pointers are a huge red flag, so seeing one not set to NULL is honestly shocking, especially in crypto code, where a stray pointer can lead to crashes or subtle security issues.

Yes, but be aware: OpenVPN is much better if you live in a country like China, Russia, and a few others. That is due to a known design issue with WireGuard.

For most people, wireguard is fine.

Edit: I should have said "choice" instead of "issue", but Firefox 140 was failing on this site so I could not correct the text. I was able to edit this after reverting back to Firefox 128.


Could you expand on the design flaw in question?

OpenVPN looks like a regular TLS stream, difficult to distinguish from an HTTPS connection. WireGuard looks like WireGuard. But you can wrap WireGuard in whatever headers you might want to obfuscate it, and the perf will still be better.

It's trivial to make WireGuard look like a regular TLS stream. It's probably not worth a 15 year regression in security characteristics just to get that attribute; just write the proxy for it and be done with it. It was a 1 day project for us (we learned the hard way that a double digit percentage of our users simply couldn't speak UDP and had to fix that).
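One reason such a proxy is a small project: TCP is a byte stream with no message boundaries, so the main work is framing each UDP datagram before writing it to the stream. A minimal sketch of that framing (illustrative only; the payload names are made up and any real proxy also needs sockets, TLS, and error handling):

```python
import struct

# Datagram framing over a byte stream (e.g. a TCP/TLS tunnel carrying
# WireGuard's UDP packets). Each datagram is prefixed with its length as
# a 2-byte big-endian integer so boundaries survive the stream.

def frame(datagram: bytes) -> bytes:
    return struct.pack("!H", len(datagram)) + datagram

def unframe(buffer: bytes):
    """Extract complete datagrams; return any leftover partial bytes."""
    out = []
    while len(buffer) >= 2:
        (length,) = struct.unpack("!H", buffer[:2])
        if len(buffer) < 2 + length:
            break  # wait for more bytes to arrive on the stream
        out.append(buffer[2:2 + length])
        buffer = buffer[2 + length:]
    return out, buffer

# Two datagrams written back-to-back come out intact on the other side.
stream = frame(b"handshake-initiation") + frame(b"transport-data")
datagrams, rest = unframe(stream)
print(datagrams, rest)
```

The receiving end strips the prefixes and re-emits plain UDP datagrams to the local WireGuard endpoint, which never knows the transport changed.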

It is, we did the same. It is a shame that only Linux supports proper fake TCP though.

Doesn't the Chinese firewall perform sophisticated filtering? Fake TCP should not be difficult to catch. I recall reading how the firewall uses proxies to initiate connections just to see what's up.

You can host a decoy on the server side.

I don't suppose you'd release it, please?

It's part of `flyctl`, which is open source.

>OpenVPN looks like a regular tls stream - difficult to distinguish between that and a HTTPS connection.

I thought OpenVPN had some weird wrapper on top of TLS that makes it easily detectable? Also, to bypass state-of-the-art firewalls (e.g. China's GFW), it's not sufficient to be just "TLS". Doing TLS-in-TLS produces telltale statistical signatures that are easily detectable, so even simpler protocols like an HTTP CONNECT proxy over TLS can be detected.


Raw OpenVPN is very easy to distinguish; its handshake signature is very different from regular TLS.

OpenVPN is fine if you want to tunnel through a hotel network that blocks UDP, but it's useless if you want to defeat the Great Firewall of China or similar blocks.


It is not a design flaw, but a design choice.

>OpenVPN does not store any of your private data, including IP addresses, on VPN servers, which is ideal.

https://www.pcmag.com/comparisons/openvpn-vs-wireguard-which...


> For most casual acquaintances

You may feel this way, but it feels a lot different when you learn that one of your acquaintances has died.

I enjoyed a brief intellectual conversation with a professor at the end of a semester. When I returned the next academic year, I stopped by his office for a quick chat, but his name was no longer on the door. The department administrator told me "Oh, he's no longer with us."

My heart sank. I didn't know him well, and he may not have remembered my name, but I wanted to thank him, and now he was gone. Cut down in his prime? He was just an acquaintance to me, not my friend. But I still felt that shock and grief deeply.

I asked the administrator how he'd died, and she quickly clarified: he was still alive! He had just been a guest lecturer visiting for one semester from a Scandinavian university and had now returned home. This has taught me not to delay expressing my gratitude for the acquaintances in my life.


There have been a few similar instances in my life that have led me to take up the personal practice of "always say hi or wave to a friend when the chance comes around, because there may not be a next time". It came about because I tend to see a lot of close friends and looser acquaintances day to day out in the world, and there used to be more times than not when I wouldn't bother crossing the street or stopping for a minute to chat. Later I realized this costs me almost nothing, and even for less-close relationships, I'd prefer to have put in the tiny amount of effort to walk up and show them they're worth even that much before they overdosed or moved away or committed suicide. It's not always opportune, but what else is life for?

Granted, in retrospect, there's not really ever a sufficient amount of interaction you could have had, but if I see someone inside a cafe that I'm walking past, it's worth popping in and at least saying hi or waving from outside.


Looks like an arbitrary validation cap. By the time we're maxing out the 64-bit underlying representation we probably won't be using Ethernet any more.

> By the time we're maxing out the 64-bit underlying representation we probably won't be using Ethernet any more.

We will be using Ethernet until the heat death of the universe, if we survive that long.


https://en.wikipedia.org/wiki/Ethernet#History (& following sections)

Calling something "Ethernet" amounts to a promise that:

- From far enough up the OSI sandwich*, you can pretend that it's a magically-faster version of old-fashioned Ethernet

- It sticks to broadly accepted standards, so you won't get bitten by cutting-edge or proprietary surprises

*https://en.wikipedia.org/wiki/OSI_model


I recently built a brilliant piece of work in my domain by directing LLMs. I used C++ and a knowledge of data structures and algorithms to achieve 3+ orders of magnitude of speedup. I generated custom data structures that I designed to have the fewest instructions possible in the hot path. I used novel non-locking communication schemes between threads.

The LLM did all the coding, but came up with none of this by default. None of the optimizations I envisioned emerged when I prompted "how could we make this faster?" If I hadn't been an expert in my field, the output wouldn't have been useful.
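For readers wondering what a "non-locking communication scheme between threads" looks like, the classic example is a single-producer/single-consumer ring buffer: one writer thread advances `head`, one reader advances `tail`, and neither takes a lock. This is a Python sketch of the index arithmetic only (not the commenter's actual code; a real C++ version would need `std::atomic` with acquire/release ordering on the indices):

```python
# Illustrative SPSC ring buffer. Only the producer writes `head`;
# only the consumer writes `tail`; capacity is a power of two so a
# bitmask replaces the modulo in the hot path.

class SpscRing:
    def __init__(self, capacity: int):
        assert capacity & (capacity - 1) == 0, "capacity must be a power of two"
        self.buf = [None] * capacity
        self.mask = capacity - 1
        self.head = 0  # written only by the producer thread
        self.tail = 0  # written only by the consumer thread

    def push(self, item) -> bool:
        if self.head - self.tail == len(self.buf):
            return False  # full: consumer hasn't caught up
        self.buf[self.head & self.mask] = item
        self.head += 1  # publish only after the slot is written
        return True

    def pop(self):
        if self.tail == self.head:
            return None  # empty: nothing published yet
        item = self.buf[self.tail & self.mask]
        self.tail += 1
        return item

ring = SpscRing(4)
for i in range(5):
    ring.push(i)  # the fifth push fails: the buffer is full
print([ring.pop() for _ in range(4)])  # [0, 1, 2, 3]
```

Because each index has exactly one writer, the scheme needs no compare-and-swap at all, which is what keeps the hot path down to a handful of instructions.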

Keep studying!

