
The attack is initiated with packets that are outside the bounds of normal expected operation. While an ideal engineering setup would provide a thorough suite of negative tests to ensure those edge cases are covered, it’s also fair to say that these kinds of bugs are the easiest ones to miss. So I think describing it as “next level negligence” is rather uncharitable. From experience writing software and running software development teams, bugs like these are inevitable and always more obvious in hindsight. The problem with hardware is you can’t always just patch a fault.


To me your position would be defensible if you were talking about, say, a crashing bug in a phone app. For a security product with privileged network placement, it’s like getting salmonella from a restaurant and then being told that food safety is easy to forget when you’re busy.


> The problem with hardware is you can’t always just patch a fault.

That is the part that is "next level negligence". If you're making "security products" and you are not able to cover basic security - i.e. have a working update process and communicate to your customers that they should actually use that working update process and not actively disable it - then you failed at your job.


They're not making security products. They're making "security" products.

Imagine you're working on products for airport security. Should you focus on stuff that might actually be useful to improve the security of the airport and planes? Or, should you focus on useless security theatre for the well-funded TSA and similar entities?

You won't find these "security" products in environments where actual security was crucial and the operators understood what they were doing. What you see there is Zero Trust, low friction but effective authentication, and occasionally actual real air-gapping, because somebody says "It would be easier if this wasn't air-gapped but that's inherently unsafe and the risk is unacceptable, so we're doing it the hard way".

But lots of places want to play pretend and for them a "security" product is perfect, it checks off the box for the relevant CxO role and causes some level of irritation while making no practical difference to actual security. Perfect.

Reminds me of the first question you ask about a "Secure" site (e.g. a data centre) to understand if they really mean "Secure" or not. "Who cleans this place?"

If the answer is "Nobody, it's a mess" or "sigh we all have to take turns" then maybe it's actually secure. But if there's some bullshit about "vetting" of staff paid minimum wage to go wherever they want, unremarked, carrying large black bags of unknown objects then the facility is not, in fact, secure.


Serious DPI vendors should really implement a proper state machine, so that they can't be fooled that easily. But middleboxes are not "security products", they can't be.

"Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection" was published in 1998 [https://apps.dtic.mil/dtic/tr/fulltext/u2/a391565.pdf]. We should know that DPI is not reliable.
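The core insertion trick from that paper can be sketched in a few lines. This is a simplified, hypothetical model (the hop counts, payloads, and first-copy-wins reassembly policy are all assumptions for illustration): a monitor that reassembles every segment it sees can be desynchronized from the endpoint by a junk segment whose TTL expires after the monitor but before the server.

```python
HOPS_TO_MONITOR = 3   # assumed network position of the DPI box
HOPS_TO_SERVER = 6    # assumed distance to the real endpoint

def reassemble(segments):
    """Naive stream reassembly: first copy of each byte offset wins."""
    buf = {}
    for seq, payload in segments:
        for i, ch in enumerate(payload):
            buf.setdefault(seq + i, ch)
    return "".join(buf[k] for k in sorted(buf))

# (seq, payload, ttl) -- the attacker interleaves a low-TTL decoy segment
wire = [
    (0, "GET /",    64),
    (5, "safe.txt",  4),   # dies after 4 hops: monitor sees it, server never does
    (5, "evil.exe", 64),
]

monitor_view = reassemble([(s, p) for s, p, ttl in wire if ttl >= HOPS_TO_MONITOR])
server_view  = reassemble([(s, p) for s, p, ttl in wire if ttl >= HOPS_TO_SERVER])
# The monitor reassembles "GET /safe.txt" while the server receives "GET /evil.exe".
```

The two parties reconstruct different streams from the same wire traffic, which is exactly why per-packet or naive-reassembly DPI cannot give reliable guarantees.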

In fact Geneva is a research project that extends the concepts behind fragroute, applying a genetic algorithm to automatically find flaws in censoring middleboxes [https://raw.githubusercontent.com/Kkevsterrr/geneva/master/e...].

It is expected that a research project of this type exposes these kinds of bugs. And the reason they can research these things is because "there isn't enough information on the wire".

Bugs of this type are egregious for their danger and simplicity, but even once these are patched, there will always be more.


I would characterize that differently: they are security products, in the sense that that's how they're marketed, but they aren't perfect and are only effective against certain behavior. That means you can't rely on them alone for everything, but it doesn't mean they don't have a security function.


My issue is exactly with how they're marketed and sold.

Information theory tells us there are infinitely many ways to encode something. The subset of encodings that satisfies the rules imposed by any middlebox is in turn infinite.
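A toy example of this (the blocklist entry and request strings are made up for illustration): a stateless middlebox matching per-packet signatures is bypassed by any of the infinitely many equivalent encodings of the same request.

```python
BLOCKLIST = [b"blocked.example"]

def naive_dpi(packets):
    """Per-packet substring matching, as a stateless middlebox might do."""
    return any(sig in pkt for pkt in packets for sig in BLOCKLIST)

plain = [b"GET http://blocked.example/ HTTP/1.1"]
# The straightforward request is caught:
caught = naive_dpi(plain)

# Three of infinitely many equivalent requests an endpoint accepts anyway:
upper = [b"GET http://BLOCKED.Example/ HTTP/1.1"]       # DNS names are case-insensitive
split = [b"GET http://blocked.ex", b"ample/ HTTP/1.1"]  # signature straddles two packets
hexed = [b"GET http://%62locked.example/ HTTP/1.1"]     # percent-encoded first byte

evaded = [not naive_dpi(e) for e in (upper, split, hexed)]
```

Every rule the box adds (case folding, reassembly, URL decoding) just shrinks one infinite set of bypasses into another infinite set.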

> they aren’t perfect and are only effective against certain behavior

This means that they are only effective against default behavior.

Anything else is out of scope for these products, which I think is what @laumars was referring to with "outside the bounds of normal expected operation".

Marketing sophistry can be fun, but defining something as a "security product" when it is mathematically proven that there are infinitely many ways to bypass the provided "security guarantees" is ... simply something I refuse to do.


> the easiest ones to miss. So I think describing it as “next level negligence” is rather uncharitable

On the contrary, mitigating amplification attacks is security 101.
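The arithmetic that makes this "security 101" is trivial. The numbers below are illustrative, not measured from the actual vulnerability: any reflector that sends a large, unauthenticated response to a spoofable request multiplies the attacker's bandwidth.

```python
def amplification_factor(bytes_in: int, bytes_out: int) -> float:
    """How many bytes a reflector emits per attacker byte received."""
    return bytes_out / bytes_in

# Hypothetical figures: a 74-byte spoofed SYN that triggers an injected
# block page spanning five 1500-byte packets.
factor = amplification_factor(74, 5 * 1500)   # roughly a 100x multiplier
```

The standard mitigations are exactly the well-known ones: never send a large reply to a peer whose source address you haven't validated (cookies, a completed handshake, rate limits).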

And the middleboxes are sold as security products.


We are talking about following well understood and published standards, such as TCP and IP. The people implementing those stacks were either negligent, or were consciously cutting corners and safeguards that those open standards already had in place. The result: lots of network pipes can be subverted by crackheads into flooding innocent netizens.

No, I got no sympathy for the people who built and sold those devices.


Have you actually gone through the TCP/IP specification and implemented everything securely?

It’s nowhere near as simple a specification to get right as your post suggests.


The 3-way TCP handshake and TTL decrementing are quite easy to get right. Those are very much foundational properties of the respective protocols. We are not talking about obscure edge cases.
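For the handshake specifically, the foundational property is a handful of state transitions (the server side of the RFC 793 diagram: LISTEN → SYN-RECEIVED → ESTABLISHED). A minimal sketch, with the class name and flag representation invented for illustration; the point is that a box tracking these transitions cannot be convinced a connection exists by a stray out-of-state segment:

```python
class Handshake:
    """Server side of the TCP three-way handshake, states per RFC 793."""

    def __init__(self):
        self.state = "LISTEN"

    def on_segment(self, flags):
        if self.state == "LISTEN" and flags == {"SYN"}:
            self.state = "SYN-RECEIVED"      # the reply here would be SYN+ACK
        elif self.state == "SYN-RECEIVED" and flags == {"ACK"}:
            self.state = "ESTABLISHED"
        # Any other segment is out of state: ignore (or send RST),
        # but never promote the connection.

conn = Handshake()
conn.on_segment({"ACK"})   # forged bare ACK arrives first: state stays LISTEN
conn.on_segment({"SYN"})
conn.on_segment({"ACK"})   # only now does the connection become ESTABLISHED
```

Middleboxes that skip this bookkeeping and treat any segment as evidence of a live connection are exactly the ones that get desynchronized.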

Also, building a bridge is not simple either, but it's a well known and well solved problem. When a bridge collapses we don't just shrug and wave it off with "It's OK, bugs happen".


It’s also very easy to get wrong.

How many bridges are built to survive abnormal conditions like earthquakes, tidal waves, or even just a lorry driver smashing into the roof of a low bridge? Some of my closest mates are structural engineers specialising in bridges, so I happen to know a fair amount about this topic, and the answer is that, outside of surprisingly few countries, most bridges aren't designed to carry any more than the expected load. Some bridges aren't even strong enough to carry present-day heavy goods vehicles. Hence why so many bridges have signs on approach instructing drivers on safe and correct usage.

However you can’t really compare software to bridge building. There are thousands of reasons why the two aren’t the same.





