Hacker News | elpool2's comments

Do you have a link to the tweet from Sweeney saying he would breach the contract?


No. Graham says it’s this one [1]. I see no threat of breach, so if that’s in fact the tweet, Cook is off his meds.

[1] https://twitter.com/paulg/status/1765431238985187525


I think everything you said was fair, but you also mentioned Twitter being a conservative cesspool, and a lot of these features like federation and composable moderation are designed to help prevent the whole "rich guy buys the company and turns it into something you don't like" scenario.


I don't see how. I'm not sure how Bluesky works, but there must be moderation; otherwise the whole website would succumb to bots and gore spam. So there are people in charge who decide what you get to see.

If the end result is politically unbiased, it's due to their conscious decisions, not some magic algorithm.


I think moderation is per server/community - like mastodon (or, conceptually, reddit)

As opposed to centralized moderation (twitter, FB, IG, etc.)


This is not the case, moderation is decoupled from each server. Users choose how they want moderation to work, and can share those tools with others.

See here for more: https://news.ycombinator.com/item?id=39471973
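To make "decoupled moderation" concrete, here is a minimal sketch of the idea described above: labels come from independent labeler services, and each user chooses which labelers to trust and what action each label triggers. All names and structures here are illustrative, not the real AT Protocol API.

```python
# Labels produced by two independent labeler services (made-up names).
labels = [
    {"labeler": "spam-watch",  "post": "p1", "label": "spam"},
    {"labeler": "gore-filter", "post": "p2", "label": "graphic"},
    {"labeler": "spam-watch",  "post": "p3", "label": "spam"},
]

# Each user subscribes to labelers and picks an action per label.
user_prefs = {
    "subscribed": {"spam-watch"},            # ignores gore-filter entirely
    "actions": {"spam": "hide", "graphic": "warn"},
}

def moderate(post_id, labels, prefs):
    """Return the action the *user's own* settings dictate for a post."""
    for entry in labels:
        if entry["post"] == post_id and entry["labeler"] in prefs["subscribed"]:
            return prefs["actions"].get(entry["label"], "show")
    return "show"

print(moderate("p1", labels, user_prefs))  # hide: trusted labeler flagged it
print(moderate("p2", labels, user_prefs))  # show: labeler not subscribed
```

The point of the design is visible in the second call: the same label exists, but because this user never subscribed to that labeler, it has no effect on what they see.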


Federation is nice, but when the platform only does one-third of what the platform you're trying to leave does, the whole thing feels like a toy.


It's really unfortunate that the tech companies set the precedent in the first place by pushing hard political agendas into their policies and moderation biases. If it had been truly neutral in the first place, we would not be having this conversation. All the people complaining only now about Twitter doing this are part of the problem.


Of course people are going to complain about content they don't want. That's the product. Twitter changed their product to deliver different content, so its audience has changed.

Calling it 'The Problem' like climate change or the national debt gives it too much power. Just use something else. People use group chats for real relationships now anyway.


FYI a government can’t borrow a currency it issues, there is no national debt (or all money is debt).


There is no "true neutral" when it comes to moderation. There are a million examples, but the most obvious are of the form "you can have group X or people who hate group X and are dedicated to driving them off the platform". Somebody's not going to have "free speech" in that case. And even if you go for what most "true neutral" advocates want, which is a lack of rules, you'll quickly find that quite a lot of people don't want to hang out at the place that's filled with Nazis or scam artists or spammers or whatever.

So in practice you have to make choices, or you'll end up running the new 4chan and being sad about your life. As happened to the guy who ran the old 4chan: https://www.rollingstone.com/culture/culture-features/4chans...


True neutral means moderating consistently, judging behavior without regard to identity.

The ideal, platonic version of this would be that moderators only see an "identity scrambled" version of each tweet/post when they make their moderation decision. Like a screen that blinds orchestra musicians when they audition, the human would see a statement like "I hate New Yorkers" and not know if the original message said "I hate New Yorkers" or "I hate Floridians." So they would have to make a decision based on the general principle of whether a statement of this form is allowable.

Anywhere you want to draw the line is fine with me, as long as you draw it consistently.
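The scrambling step described above could be sketched as a simple preprocessing pass: group identifiers are swapped for a neutral token before the moderator sees the post, so the decision rests on the statement's form rather than on which group it targets. The term list and examples are made up for illustration.

```python
import re

# Hypothetical list of group identifiers to conceal from moderators.
GROUP_TERMS = ["New Yorkers", "Floridians", "Texans"]

def scramble(post: str) -> str:
    """Replace known group identifiers with a generic placeholder."""
    pattern = "|".join(re.escape(t) for t in GROUP_TERMS)
    return re.sub(pattern, "[GROUP]", post)

# The moderator sees the same scrambled text for either original:
print(scramble("I hate New Yorkers"))  # I hate [GROUP]
print(scramble("I hate Floridians"))   # I hate [GROUP]
```

Of course, a fixed term list is exactly where the counterargument below bites: the scrambler only conceals identities it already knows about, and novel or coded terms would pass through unscrambled.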


That sounds like a very personal definition of "true neutral". And also an unworkable one.

Take the use of reclaimed slurs, for example. When used against the discriminated group by a dominant group, their intention is often to cause harm. When used within the group, the intention is to reappropriate the term: https://en.wikipedia.org/wiki/Reappropriation

Similarly, harassers will use terms in ways that are plausibly read different ways depending on who they're talking to. So something that might sound innocuous or just odd when directed at me will be correctly read as a racist attack when directed at somebody else.

And that's not even counting when they'll just come up with new terms so they can be awful in ways that are novel enough that automated filters or out-of-date moderators won't catch. E.g.: https://www.vice.com/en/article/bv88a5/white-supremacists-ha...

In short, because there's a great deal of identity-based hate in the world, identity-blind moderation ends up being an aid to the identity haters out there.


The element of moderation that you consider essential -- the latitude to apply subjective judgments that rely on knowing the specific identities of the participants -- is precisely the element that I do not trust moderators to perform.

That this moderation strategy would prevent the use of all slurs (even reappropriated ones) sounds like a feature to me, not a bug.


"That this moderation strategy would prevent the use of all slurs (even reappropriated ones) sounds like a feature to me, not a bug."

You're proposing erring on the side of censorship to avoid some gray areas. While this is a reasonable position, it doesn't satisfy some ideal of neutrality and won't really avoid the gray areas, and so still would require subjective judgement.


For sure. While at the same time allowing the more clever variety of abuser to sail on past.

In practice, almost any nominally "neutral" position ends up allowing an enormous amount of abuse. Which is why you'll see most platforms that start with a free-speech maximalism approach coming up with a lot of nuance and exceptions over time. And those that don't turn into cesspools.

Most people are pretty great, but moderation has to be built for the worst-case attacker.


If detecting abuse requires knowing the identities of the people involved, it sounds like another way of saying that some behaviors are fine if they are directed at certain people, but "abuse" if directed at other people.

Which is ultimately what I object to.


No, I'm proposing erring on the side of consistency. I think it's likely that this strategy would result in less "censorship" in some cases, and more in others.

What we have now is a system where, on many platforms, moderators often put their thumbs on the scale and decide that certain groups need more protection than others. Generalizing about or disparaging certain groups is ok, but the sensitivities of other groups are considered sacrosanct and must be deferred to.

Like I said, draw the line anywhere you like. If it applies to everyone equally, I am happy. I am fine with things that require subjective judgment, as long as that subjective judgment is behind a screen that conceals identity.


And also “what’s a slur” alone is very subjective. For instance on Twitter Elon has decided “cis” and “cisgender” are slurs, but “trans” and “transgender” aren’t. But in gender discussions the terms come up all the time, and they are just terms.

Moderation is full of gray areas, and they are unavoidable.


truly neutral = post anything? there exist such platforms, and they're cesspools because of human nature


[flagged]


Stop with the straw men. That has nothing to do with the backstory of why he bought Twitter.


[flagged]


You sound mad. Get some air.


[flagged]


How do I block low iq trolls like jrflowrs on this platform? I tried to search the FAQ but couldn't find anything.


It will be interesting to see how far the anti-steering ruling actually goes. Will Apple still be able to block links to alternative payment options? Or what if the link contains a token that logs you in automatically and goes straight to a payment form that’s almost indistinguishable from an in-app form?


"Developers may apply for an entitlement to provide a link in their app to a website the developer owns or maintains responsibility for in order to purchase such items."

https://9to5mac.com/2024/01/16/apple-revises-us-app-store-ru...


Thanks and wow, they’re really doing everything imaginable to make it difficult to include such a link. And their rules say the link cannot contain any additional parameters so it rules out the possibility of just linking directly to a payment form. You would have to log in first, then pay.
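Based on the rule as described above (no additional parameters on the link), a check like the following would distinguish a permitted plain link from the kind of auto-login deep link the earlier comment imagined. The URLs are made-up examples, and this is only a sketch of the stated restriction, not Apple's actual validation logic.

```python
from urllib.parse import urlparse, parse_qs

def has_extra_params(url: str) -> bool:
    """True if the URL carries query parameters or a fragment."""
    parsed = urlparse(url)
    return bool(parse_qs(parsed.query)) or bool(parsed.fragment)

# Plain link to the developer's site: no parameters, so permitted.
print(has_extra_params("https://example.com/subscribe"))            # False
# Link with an auto-login token: extra parameter, so disallowed.
print(has_extra_params("https://example.com/subscribe?token=abc"))  # True
```

Under that reading, the user lands on a generic page and has to log in before paying, which appears to be exactly the friction the rule is designed to preserve.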


Devs just won't add links; they'll simply inform their users that it's cheaper on their website, and Apple can't do anything about that. They are not allowed to ban it, according to this ruling.


What is there to enforce? There are no duties, obligations, or punishments in the law, it just provides an affirmative defense for lawsuits.


The issue is that, the way this is written, the AI doesn't have to be responsible for the libelous content, it just has to be involved. If I post something defamatory on HN, and HN helpfully checks my grammar, then HN is no longer protected by section 230. The language isn't precise enough. Maybe a court would interpret "involved" to mean "materially contributed to the illegal nature of the content", but maybe not?


Yeah, but this video is clearly just transformers blowing. A block of street lights goes out at the exact same time as one of the flashes.


The part you quoted and called bullshit is factually correct, though. Canada has told Meta and Google that if they show links to news sites (which do drive business to those sites), then they have to pay those news sites. It’s not a characterization; it’s what has actually happened. How can you deny that?


The part that’s bullshit is this:

> they have to pay the newspaper for sending business to the newspaper

That’s an opinion.

Another way to explain the situation is: they are being forced to pay for content that they’ve monetised at the newspaper’s expense.

There are several other ways to view this situation, but Ben Evans has decided to push this version, which makes it sound like big tech is being somehow generous by sending traffic in the first place.

In fact, Google and Meta in particular have been pushing news producers against the wall for years.


They have to pay for showing links, and those links do send traffic to news sites. That is indisputable.

Your phrasing is just incorrect, honestly. Linking to a website is not at all taking the site’s “content”. And monetizing search results doesn’t do anything “at the newspaper’s expense”; what expense has a news site incurred by someone sharing a link to their site?


There are different degrees of “secure”, right? Maybe you could give a key to the FBI without China getting ahold of it, maybe it’s still “secure enough”. But you can’t say it’s “just as secure” as not giving it to them. And that’s what law enforcement often asks for: Give us access without making it any less secure.


> And that’s what law enforcement often asks for: Give us access without making it any less secure.

No, I don't buy that. Maybe I'm going out on a limb here but I'm gonna say this is almost certainly a strawman caricature of what they're being asked, not what they're actually being asked. Law enforcement isn't stupid and people (especially law enforcement) understand that pretty much nothing in this world works in absolutes. They probably don't think the decrease in security is significant, but everyone (heck, even a kid) understands that the more people have access to something, the less secure it is.


My company has been fully remote since 2020, I agree that collaboration is still easy. The sort of meetings where we used to sit in a conference room and brainstorm on a whiteboard just now happen over Teams with screen sharing.

I do think it's much easier when everyone is remote, though. It sucks when half the people in a meeting are in a conference room and the rest are remote. You end up dealing with dumb technical difficulties, and it's too easy to exclude virtual folks from the conversation.


I’m on Google/meta’s side because link taxes are a bad idea for the internet as a whole. That anyone (even Meta, who I generally dislike) should have to pay to link to a news article is morally offensive to me.


I've skimmed the legislation multiple times, and the only thing close to a "link tax" is the theoretical ability of the government to create one if that's what's deemed fair.

I've yet to see anything that explains how a negotiations framework translates to a "link tax", but explaining it wasn't Google's goal when it coined the term to fight the bill.


Because one of the things the negotiating framework covers is how much money Google and Meta must pay to news providers when news content is “made available” to Canadians, and “made available” specifically includes when “access to the news content, or any portion of it, is facilitated by any means, including an index, aggregation or ranking of news content”. So if the negotiated value is anything over $0 then, yeah, it’s a “link tax”.

