Hacker News | robertgraham's comments

The "Washington Game" is described by the Society of Professional Journalists. https://www.spj.org/spj-ethics-committee-position-papers-ano...

Citing anonymous sources is not established ETHICAL practice, it's corruption of the system. The role of the journalist is to get sources on the record, not let them evade accountability by hiding behind anonymity. Anonymity is something that should be RARELY granted, not routinely granted as some sort of "long established practice".

What is the justification for anonymity here? The anonymous source is oath bound not to reveal secrets, so what is so important here that it justifies violating that oath to comment on an ongoing investigation? That's what we are talking about: if they are not allowed to comment on an ongoing investigation, then doing so is a gross violation of their duty. The journalist needs to question their motives.

We all know the answer here: they actually aren't violating their duty. They aren't revealing some big secret like Watergate. They are instead doing an "official leak", avoiding accountability by hiding behind anonymity. Moreover, what the anonymous source reveals here isn't any real fact, just more spin.

We can easily identify it as propaganda from comments like the one about the SIM farms being "within 35 miles of the UN". All of Manhattan is within 35 miles of the UN. It's an absurd statement on its face.


The article you cited does not agree with your assertions. It specifically tells you how and when to evaluate the use of an anonymous source.

If you never use anonymous sources, far fewer people will talk to you. Being on the record about something that will get you fired will get you fired - and then no one talks to journalists.

What separates actual ethical journalists from the rest is doing everything the article you cited suggests - validating information with alternative sources, understanding motives, etc.


You don't have to use every single source you talk to in your article though. Sure, I will grant my neighbor's dog anonymity but I won't include his opinion in my article at all.


Totally. If there's something to whistleblow then whistleblow, don't just gossip at a bar to a journalist.


> The anonymous source is oath bound not to reveal secrets

When you say this, what oaths are you specifically thinking about?


This blogpost comments on Linus's latest tirade by reading the code.

For example, it explains the specific technical-debt problem of failing to mask off high-order bits by casting the low part to (u16). Years from now, when this leads to a bug, you'll no longer be able to fix the macro, because by then some code will inevitably depend upon high-order bits in the low part.


My guess is that your original SYN did not go to the target, but was redirected somewhere close by. I'd look at the TTL value in the IP header of your first SYN-ACK, and play with such things as traceroute.

Such redirection is often done on a specific port basis, so that trying to access different ports might produce a different result, such as a RST packet coming back from port 1234 with a different TTL than port 443.

There is so much cheating going on with Internet routing that the TTL is usually the first thing I check, to make sure things are what they claim to be.


The first two answers are 8 and 0.

They are technically `undefined` according to the C standard, but are the behavior of every mainstream compiler. So much of the world's open-source code depends upon these that it's unlikely to change.

Using clang version 15.0, the first 2 produce no warning messages, even with -Wall -Wextra -pedantic. Conversely, the last 3 produce warning messages without any extra compiler flags.

The behavior of the first two examples is practically defined even if undefined according to the standard.

Now, when programming for embedded environments, like for 8-bit microcontrollers, all bets are off. But then you are using a quirky environment-specific compiler that needs a lot more hand-holding than just this. It's not going to compile open-source libraries anyway.

I do know C. I write my code knowing that even though some things are technically undefined in the standard, they are practically defined (and overwhelmingly so) for the platforms I target.


> Now, when programming for embedded environments

Something that a lot of C programmers do...

Most people who write for typical desktop and mobile computers don't do C. They tend to do C++ or other, higher-level languages. Those who write C tend to do either quirky embedded code or code that is highly portable; in both cases, knowing about such undefined or implementation-defined behavior is important.

If you intend to rely on such assumptions, make them explicit, for example using padding, stdint, etc. On typical targets like clang and gcc on Linux, it won't change the generated code, but it will make the code less likely to break on quirky compilers. Plus, it is more readable.
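A sketch of that advice (`wire_header` is a made-up example): fixed-width stdint types plus a compile-time check turn an implicit layout assumption into an explicit, enforced one.

```c
#include <stdint.h>

/* Fixed-width fields, arranged so no implicit padding is needed.
 * The static assertion documents -- and enforces at compile time --
 * the layout assumption, so a quirky compiler fails loudly
 * instead of silently producing a different layout. */
struct wire_header {
    uint16_t type;
    uint16_t flags;
    uint32_t length;
};

_Static_assert(sizeof(struct wire_header) == 8,
               "wire_header must be exactly 8 bytes");
```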


You start off confident that the first answer is 8.

Then you admit that the microcontroller world presents exceptions.

You've now arrived at "I don't know" the answer.

The article never said "using mainstream C compilers".


The first 4 are implementation defined rather than undefined.

That said, warnings do not necessarily mean that the code is invoking undefined behavior. For example, with if (a = b) GCC will generate a warning, unless you write if ((a = b)). The reason for the warning is that people often mean to test equality (==) and write assignment (=) by mistake, so compilers warn unless a second set of parentheses signals that you really meant to do that.


In the cases involving overflow, it's implementation-defined whether there's undefined behavior.


> The first 4 are implementation defined rather than undefined.

Third and fourth are only defined in some implementations.


That is fair for 4, although would you explain why it is the case for 3?


If char is signed and ' ' * 13 is bigger than CHAR_MAX, you get UB by signed overflow.


Every mainstream compiler targeting a 32 or 64 bit platform.

Have we crossed the point where the majority of new microprocessors and microcontrollers sold each year are 32+ bit? Most devices I’m familiar with still have more 8 and 16 bit processors than 32 and 64 bit processors (although the 8 bit processors are rarely programmed in C).


>no warning messages, even with -Wall -Wextra -pedantic

A better test is -Weverything.


Nobody (yet) has mentioned Microsoft PWB - Microsoft's Programmers Workbench for their C compiler, around 1990. It's what all the Microsoft engineers themselves used when writing code for Windows, WinNT, OS/2, etc. It was essentially perfect for its time.


Ethernet doesn't use TCP/IP. Ethernet is its own network. It has nothing to do with TCP/IP.

Other things use Ethernet. Routers, when connected to each other, often use a local Ethernet network to communicate.

Think of the TCP/IP Internet as its own network, ignoring how routers physically talk to each other. Sometimes it's a direct link, a wire. Sometimes it's carrier pigeons. Sometimes it's WiFi. Sometimes it's Ethernet. Whatever it is, it's local to the hop between routers and does not extend beyond that.


I think my confusion arises because, when I access some other PC on the local network, I connect to an IP and port. That's not on the wide Internet, but it uses IP to communicate. Therefore... where's Ethernet's role in that?


I use RFC 791, the original Internet model.

RFC 1122 is a retconned version of the Internet model that tried to change terminology to fit OSI.


Yea, it's not about engineers constructing systems. I mean, engineers do frequently pretend their creations fit the OSI model, but they work backwards to make it appear to conform to orthodoxy.

The issue is about education. People teach the model, or some variation of it. It teaches misconceptions, such as how Ethernet and the Internet are integrated in a single network stack rather than being independent networks. It leads to professionals in IT and cybersecurity who continue to hold this misconception.


> It teaches misconceptions, such as how Ethernet and the Internet are integrated in a single network stack rather than being independent networks.

It really opened up a whole world of understanding for me when I decided to go look up "RFC 1" to see what it was about.

Reading that — and the few low-numbered RFCs after it — made me realize that "the birth of the Internet" as we know it, was essentially the moment of the deployment of the first packet switch (the BBN IMP), isolating physical networks' electrical properties and collision domains from one-another and using DSPs to arbitrarily re-write packets between different signalling standards — thus rendering uniformity of physical/electrical media and LAN signalling standards, completely irrelevant.

Until that moment, I had always thought of "the Internet" as a standard for LAN networking that grew like a social network until it overtook the world — where things like "Ethernet" and "TCP/IP" were the "Internet flavor" (DARPA flavor?) of those LAN-networking technologies; where "the Internet" was competing with other LAN technology suites, the likes of ChaosNet or AppleTalk or NetBEUI; where people gradually "switched over" from using whatever networking equipment and signalling protocols they had been using, to using Internet networking equipment and standards; and where the fact that people were finally settling on the same networking protocols across multiple Autonomous Systems, allowed them to finally yoke those systems together into inter-networks, with more and more of that happening until we had one big hierarchical LAN called The Internet.

But no! The whole clever thing about "The Internet" is that it didn't do that! It just took all the random proprietary networks that people had built, and connected them together as black boxes, by coming up with a set of standards for how the networks would speak to one-another at their border gateways, and leaving everything else up to implementation, with the assumption of border-gateway routers being implemented by each LAN-technology-vendor to translate between "Internet" signalling and whatever that LAN was doing!

And, in that view, the whole "layer separation" concept — of there being such a thing as an "IP packet" that bubbles up to userland separately from any delivery enveloping — wasn't fundamental to The Internet; in fact, existing protocols that were vertically integrated continued to work, being rewritten into something else when they reached the AS border gateway. The "layer separation" was an optimization to allow new "post-Internet" protocols to be passed "transparently" across AS border gateways without those gateways needing to know about them to rewrite them.

Rather, these "post-Internet" protocols, consisting of separate "LAN envelope" and "Internet payload" parts — and designed with transceiving logic on the endpoints such that the "Internet payload" could traverse the [lossy, laggy] Internet intact — could simply be re-enveloped from "LAN packets" into "Internet packets." And the responsibility for constructing/parsing the "LAN envelope" part would be taken away from userland, made the responsibility of the OS, so that "post-Internet" applications could be portable between computers that used different LAN technologies but wanted to speak the same Internet protocols.

But, of course, network stacks continued to support non-layer-separated communication for decades after the advent of The Internet; and AS border gateways continued to support rewriting these protocols "at the edge" for just as long.

It wasn't until much later that LAN networking equipment truly became commoditized. (I have a Windows 2000 manual that describes its support for token-ring networking. Windows 2000!) In a fully post-Internet era, when everyone is using Internet protocols, a "network" could no longer offer much to differentiate itself. So we started to see shifts to actually standardize on LAN technologies, with vendors all moving toward making the same stuff. At that point, border gateways began getting simpler, and companies like Cisco that had made their fortune in the AS-border-gateway space stopped being household names, instead being relegated to the NOC (as they had successfully pivoted into dumb-but-high-throughput enterprise LAN switching, and even-dumber-but-even-higher-throughput Internet backbone switching.)


It's wrong to think of them as different functions of the same network. Instead, they are different networks.

Ethernet and the Internet both provide the function of forwarding packets through a network to the target destination address. The MAC protocol and IP protocol provide the same theoretical function.

Where they differ is that Ethernet was designed to be a local network, whereas the Internet is designed to be an internetwork. All the Internet demands from local networks is that they get packets from one hop to the next. Ethernet does this for the Internet, but so do carrier pigeons.

The same is true for HTTP. Instead of thinking of it as a component of the network, think of it as something that rides independently on top of the network. In a hypothetical future where we've replaced the Internet with some other technology, the web would still function.


> Where they differ is that Ethernet was designed to a be a local network, whereas the Internet is designed to be an internetwork.

Then your criticism of the first three OSI layers is merely that they should be named "point-to-point", "local", and "inter".

I can agree with that criticism. But I can't agree with needing 239 pages to convey it.


You can't. Nobody knows what the session layer does. Most falsely believe that layer handles sessions.

