I think the echo chamber one misses the point (or maybe the analogy of an echo chamber is bad).
It's true that people are exposed to other views, but there is still the echo effect of what their bubble thinks of those other views. Two people discussing their differences is not the same as two (or N) bubbles hurling outrage at one another, which I think is closer to most of Twitter. The echo chamber still applies within your bubble and its criticism of others.
The vast majority of people's exposure to other views is just the dumbest iterations of said views, served up for the express purpose of "dunking" on them to further calcify previously held positions. At least on Twitter, the goal is to find morons on the other side of the ideological divide and mock them.
I am not talking about obvious (and easily labeled) morons. There are many educated, real scientists who raise questions, and instead of dialogue they get only a totalitarian response from discredited fact-checkers.
Science is always open to listening, discussing, and being corrected, and at no time in history has an "an attack on me is an attack on science" attitude ended up being right or productive.
Without this approach and open dialogue, it is a circus everywhere.
It's 280 now, but the point is the same: The system has been engineered from the ground up to create an environment that ENcourages insipid repetition, and actively DIScourages insightful discourse. I fear this will somehow get worse with Jack's departure.
Kind of reminds me of when one of the foremost mRNA researchers, Dr. Robert Malone, was temporarily banned on LinkedIn over his views on the vaccine. Speaking in a JRE interview, he said that after the ban, LinkedIn (probably after public backlash) sent him an apology letter saying they could not assemble a team qualified enough (or were not themselves qualified enough) to fact-check him. Let that sink in for a moment...
When did we start turning to social media for our health facts? Fact-checking from big tech is the silliest thing normal people have embraced on the Internet. Kind of loony when you think of the historical context of the internet.
I also don't understand that position. If somebody doing research in a certain domain comes to talk about that domain, then whatever they say will be relevant. Even if ill-intentioned, they will know their stuff and come up with a valid critique which may or may not need to be taken into consideration. Now if a neurosurgeon - even a brilliant and successful one - talks about vaccines, there's already a serious chance the topic goes astray, and the further a person is removed from the field, the lower the trustworthiness of their comment. I know, there are exceptions to this rule, and everyone thinks they are that exception...
I think that stretches the definition of 'echo chamber'. That's simply what a community is, a group of like-minded people with shared views.
There's nothing automatically wrong with this. People can have shared criticisms of others if there is some meaningful difference between the groups that justifies that disagreement. I think people mistake 'hurling outrage' with simply two (or more) groups being fundamentally in disagreement.
The answer to this is actually the opposite of what people commonly suggest. Just let people with irreconcilable differences go their own way, which is why I like the researcher's suggestion to improve tools to shield oneself from hate.
Instead of demonizing so-called 'echo chambers', I think we should just call it what it is: freedom of association.
"echo chamber" completely misses the point: exposure to outrageous posts is the goal.
If you want constructive dialog that builds towards an actionable plan a dedicated forum and a wiki would be the right tools.
It's just dumb to use a platform that exposes your imagination to people who think the exact opposite - and does so on purpose!
Band-aid solutions where these fools get to have their precious walled-garden internet empire and eat it too are just never going to work.
You post a picture of your dinner on Facebook and some are going to think it looks disgusting. One vegetarian is going to take time out of their busy day (30 seconds) to write a couple of sentences describing your plate full of boiled corpses. Your bestie says it is mean to say that. Reply: it is mean to eat animals. You have a hard time deleting your bestie's comment while they summon the cohorts to deal with this vegetarian picking on you. Six months later and you are still talking about it.
I agree that echo chambers aren't bad by default; the issue is they commonly fall into groupthink patterns. If the subject doesn't have a strong tie to reality, it's easy to trend towards extremism.
It is basically incorrect. Most people in these ingroups are hesitant about sharing things in public. They know it is not OK. Most of the communication is done within closed, safe environments. I have several sect members on FB and the total noise they make in my feed is very low.
It seems plain as day to me that many people are far more hateful online than in real life, mostly under the cover of anonymity and/or the physical divide. You can say some truly nasty things online that in the physical world would make you wake up in a hospital. There's no real correction mechanism online.
I can't believe the research doesn't address the point that some others here are making as well: the most harmful content is promoted, to the point that you exclusively see harmful content, which is then normalized. Social media promotes the crazy and silences the reasonable.
Not a word on the massively increased speed of information. There is no time to refute any point because the damage is done in minutes and garbage spreads, and then the next one comes.
Not a word on the complete lack of trust in information itself. In media, science, even simple verifiable facts. It doesn't seem to matter any more, people just make up their own facts.
And then the solutions:
"Invest in remedying the offline frustrations that drives online hate"
Ah, ok then. The solution is to just improve the world.
Most research shows that anonymous accounts actually facilitate more conciliatory and nuanced discussions.
The big issue in social media is that the moderates are attacked by both sides. They are often attacked more viciously by those of their own political persuasion who are even more to the left/right than they are. This especially occurs if they agree at all on any point with "the other". Obviously this has a substantial chilling and muzzling effect.
Anonymous accounts have less face to lose in the real world if they take a more moderate stance. However, over time even anonymous accounts build social credit with the in-group of whatever ideology they support.
This has many negative effects. One is that politicians get the most visible engagement on their posts from the most extreme followers, and so the politicians themselves move to more extreme positions because they think they don't have any moderate support. Twitter is not real life.
Indeed. And not just politicians, also traditional media, which are now hard to tell apart from "normal" Twitter accounts. They're all in on fast juicy action for maximum engagement.
So you can't trust your peers, politicians or traditional media. They're all Twitter-like now.
Maybe it would be worth repeating the 'myths' he lists in the tweets:
Myth 1: A lot of misinfo on social media
No, research suggests there is little, shared by few & having small effects. Those sharing misinfo are not dumb. But they have intense political animus, which motivates them to share what fits their worldview, true or false.
Myth 2: Social media makes people hateful
No, research suggests that online hate reflects offline frustrations that make them hateful both online & offline. The hateful are few in numbers but they are attracted to politics and, hence, are much more visible.
Myth 3: Social media are echo chambers
No, research shows that, for most, social media breaks the bubble. We are more connected to "the others" on social media than in our offline lives. That is why it feels unpleasant - because it is the most hateful "others" we meet.
I would love to read the research he's referring to, but I couldn't find any links or citations. I intuitively believe that there's truth to all of these so called myths, but I'd love to be proven wrong. Anyone know where I can read more about this?
I would, too, because I've seen research that says otherwise. I've talked with this researcher[1] about it in the past, not sure if their page has the study they were conducting online yet, but the page has papers that are relevant/adjacent to this topic.
My initial concern, based on reading the abstract, is that since this study only looks at Twitter and not Facebook, it isn’t enough evidence to make any conclusions about echo chambers in a general way as so many people get their news from Facebook.
Facebook tends towards people you know in real life, so it'd be closer to the echo chamber that is your immediate social circle, but possibly a bit more open because it doesn't concentrate your attention as much as you probably do offline, i.e. 50% with your spouse and only seeing your high school buddy every two years.
I guess a lot depends on your job: if you work in healthcare or retail, you'll have pretty diverse interactions. Software developer in their home office? Not so much.
Reddit popular is a total echo chamber. Almost every sub, outside of focused hobby subs, turns into an echo chamber eventually.
The dynamics of Reddit, with downvotes totally squelching dissenting opinions and upvotes promoting popular ones, are naturally going to turn it into a total echo chamber.
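That mechanic can be shown with a toy simulation (everything here is invented for illustration; it models only the voting arithmetic, not Reddit's actual ranking code):

```python
import random

random.seed(0)

def simulate(majority_share=0.8, readers=1000):
    """Toy model: each reader upvotes the comment matching their view
    and downvotes the one that doesn't."""
    consensus_score, dissent_score = 0, 0
    for _ in range(readers):
        if random.random() < majority_share:  # majority-view reader
            consensus_score += 1
            dissent_score -= 1
        else:                                 # minority-view reader
            consensus_score -= 1
            dissent_score += 1
    return consensus_score, dissent_score

consensus, dissent = simulate()
# With an 80/20 split, the dissenting comment's score goes deeply
# negative, well below any threshold at which it gets collapsed/hidden.
print(consensus, dissent)
```

Even a modest 80/20 opinion split drives the minority comment's score so far negative that, under score-sorted display, it effectively disappears - which is the squelching effect described above.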
If you haven't touched Reddit in a while - especially if you've been involved in real conversations/work on the same topics in the meantime - it's a bizarre experience to visit various subreddits and see the particular subcultures which have developed. It tends to be very narrow, uninteresting, and annoying. I think most any monolithic community is going to be more or less like this.
It definitely includes hobby subs too. Like, you may find everyone reciting "never do X", even though it's actually correct to do X sometimes, and that's just an oversimplified rule for beginners.
It's why I keep coming back to Twitter, because it only gives me exactly the people I choose to follow, even if I sometimes have to fight the app to keep that default chronological view with no garbage added.
> I intuitively believe that there's truth to all of these so called myths, but I'd love to be proven wrong.
His opinions are based on claimed research while yours are based on intuition. I'd sooner trust his opinions over yours because he at least could be exposed as a liar if it turns out that the research doesn't back up his opinions. You, on the other hand, would at worst be called naive.
I think the myth is the misinterpretation of scale. People tend to believe social media is primarily hate and misinformation, that most users are simply addicted to it like dope fiends, and that it's nothing but echo chambers, etc. All of these things are true, but likely to a lesser degree than assumed.
Agree, the negative part is disproportionately impactful.
As an analogy, my g/f and I recently went grocery shopping. As we got out she was fuming at how old people completely ignore COVID rules, whilst society is largely making sacrifices for them.
I was part of the whole experience, and two people ignored the rules. One did not have a mask, another did have one but ignored social distancing.
There were probably some 200 people in the store. So actual reality is that almost everybody complied. But it takes just one or two that don't, to make a sweeping conclusion like that.
Exactly my thoughts. OP needs to read _Thinking, Fast and Slow_ and/or _The Scout Mindset_. Actually, COVID as a whole has been yet another revelation of human irrationality and of our failure to understand statistics, numbers, and growth rates, large and small.
> No, research suggests that online hate reflects offline frustrations that make them hateful both online & offline
That conclusion seems completely wrong to me. Sure, there are previous frustrations, but when you feel attacked online, you become hateful as a defense mechanism… and of course, there is always some YouTube channel ready to reinforce that anger.
And this happens with almost every social medium since Freenode.
When I feel attacked online I stop going to the places where I don’t feel welcome. As you say, some people react by engaging and escalating, but I don’t think most people stick around long in those situations. What this guy is saying seems to be that the people who escalate are likely people who already behave that way offline.
My experience is that people are more socially refined when offline and that anti-social behavior is there but gets corrected and contained. Offline experiences can escalate much more, but at the same time there seems to be more conflict resolution. Everything is more intense.
But it’s refreshing to read about a different view on the topic.
My opinion is that it's rooted in the anonymity people feel on the 'Net. There are few, if any, consequences to being a jerk online---nothing like in "real life".
The other problem I see is that a person says what they feel in the moment---and then they move on. But the (emotionally charged) comment from that moment lives on, potentially for years. The "energy" doesn't dissipate with time like it does in "real life".
It's not anonymity but the distance that matters. Plenty of people on Facebook with their real names and locations (visible through photos and events) don't seem to care about saying things that they might not say face to face.
I think there are consequences, and I think the problem is that jerks favor the punch in the gut over the lukewarm indifference they receive in return for small honest contributions. It's probably a natural tendency of a "superflat" social environment that hateful low-effort remarks pay off better.
A very recent extreme example from Ireland: Just last week, there was a horrific murder of a woman out running. Ashling Murphy, a teacher.
The next morning, print and radio media ran the leaked news that a Romanian national with many priors had been picked up by local Irish Gardai (cops).
For a day or two, racism and xenophobia ran rampant - then, sure enough, he was cleared of suspicion and released. The man's life is ruined now.
IMMEDIATELY afterward, print and radio media blamed social media for jumping to conclusions, doxxing the man and ruining his life (social media were actually very good at removing his name). Old media are now calling for regulation of social media and blaming them for the whole thing, projecting and deflecting with impunity.
...
There are countless other examples - Joan Burton in Jobstown, Irish Water protests, Maurice McCabe - every time the government or Gardai or media are caught in a major lie thanks to social media, there are a couple weeks of stories about online bullying and the need to regulate social media.
It's so charmingly obvious, and the lie is blatant, but it works on enough people that they get away with it. The cost to society is extreme, from enabling corruption to slowing progress to dulling and poisoning the minds of the gullible.
Among other reasons, just consider that media suffered as much, financially, from monster.com and eBay, which devastated classifieds. For many local papers, that was the single largest change in the last two decades.
And I don’t remember any campaigns against those companies.
The individual journalist and editor actually writing and deciding on a story also doesn't hold the grudges you assign to all of "media". They don't have the power to effect any change that could ever make the sort of difference needed to come back to them in any meaningful way. And they are unlikely to experience the strong emotions "media" might have, because they haven't experienced much of the loss that the industry has: they still have a job, for example. And by now, they are likely too young to know better times pre-internet.
>The individual journalist and editor actually writing and deciding on a story also doesn’t hold the grudges you assign to all of “media”.
That assertion doesn't hold up when you read the personal Twitter feeds of many journalists and editors of the publications pushing the anti-tech narrative.
2. FB promotes content with the fastest growth of attention: upvotes, downvotes, comments. It's the opposite of HN, basically. The most engaging content doesn't have to be divisive, but since the majority are emotional beings with little interest in abstract thoughts, they react better to emotions. Maybe in a few centuries the majority will be more concerned with thoughts and less with emotions, and the equivalent of a flame war will be snarky scientists exchanging obscure arguments about the correctness of some irrelevant theorem; e.g. NP-deniers would reject the NP=P equality, NP-protagonists would support it, and the moderate minority would advocate for a middle ground - that neither statement is provable. Those moderates would be dismissed as Gödel-ists.
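The "fastest growth of attention" idea can be sketched as a hypothetical velocity score - engagements divided by age - in which every reaction counts, positive or negative (all names and numbers here are invented; real feed-ranking algorithms are proprietary and far more complex):

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    engagements: int   # upvotes + downvotes + comments, all counted alike
    age_hours: float

def velocity(post: Post) -> float:
    """Rank by rate of engagement rather than total: a new post
    gathering reactions quickly outranks an older, larger one."""
    return post.engagements / max(post.age_hours, 0.5)

posts = [
    Post("calm long-form essay", engagements=900, age_hours=48.0),
    Post("outrage bait", engagements=400, age_hours=2.0),
]
ranked = sorted(posts, key=velocity, reverse=True)
print(ranked[0].title)  # prints "outrage bait": fewer total reactions, but far faster
```

Because downvotes and angry comments count the same as approval, divisive material maximizes this kind of score almost by construction.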
3. FB is like a big open club that admits anyone and everyone, and all events take place in the only hall room. Predictably, it turns into a shouting match and FB has to ban some members, and by banning some ideas and not others, the club necessarily turns into an echo chamber.
The echo chamber isn't due to bans. It has to do with Facebook's algorithms feeding you news stories you think you will relate to.
Besides, I have yet to see anyone banned from any social media site whose absence makes the site worse. In fact, I blame the various social sites for not banning them sooner.
> Myth 2: Social media makes people hateful
> No, research suggests that online hate reflects offline frustrations that make them hateful both online & offline.
That's just a sleight of hand. Sure, people may have been hateful before, but social media amplifies that hate exponentially.
Social media thrives on outrage, being addictive, and uses algorithmic ranking to feed it as much as possible.
> The hateful are few in numbers but they are attracted to politics and, hence, are much more visible.
Few in numbers? Have you seen what goes on during election years?
Who hasn't been talking about the January 6th events?
I'm not sure what there is to 'intellectually debate' about it, but I saw a decent amount of relatively sober discussion at the time. Did you not?
People who still talk about it this far after the 24 hr new cycle has moved on probably have strong emotions around it, so it makes sense that recent conversation is more emotionally charged.
I agree with a lot of that. In my opinion, the problem is the news feed. My very "hot take" is that social media does not only NOT create echo chambers, it's actually the opposite problem.
People naturally self-organize according to common interests and values. Prior to "modern social media", forums were popular, and each forum was special-interest and broken into sub-forums. There was usually the obligatory "Off Topic" forum where anything goes (and you'd still have organized threads) and often the "Only Religious" thread and "Only Politics" thread. People would venture into those threads knowing exactly what to expect, and they could bail out at any time. Some of those threads got heated, but I don't remember people saying forums were causing a mental health crisis or that people were being censored because mods deleted off-topic or obscene content.
Social media like Twatter and Facebook introduced the "news feed", where everything and anything gets shoved down your throat and you have no ability to filter by content or topic. The best you can do is follow/unfollow or mute certain pages or friends. I don't know anyone who doesn't want a chronological sort option, but "the algorithm" wants to push the most engaged-with content to the top. And we all know that the most engaged-with content tends to be the most incendiary and provoking.
Don't get me wrong, exposing yourself to "uncomfortable" information and challenging your beliefs is healthy, and everyone should do it from time to time for self-growth. But the vast majority of people who go on Twatter or Facebook are looking for entertainment, not rational discourse. They log on after a hard day at work looking to unwind with cute cat gifs and instead get their crazy uncle's political rantings shoved down their throats or, far worse, some clickbait news headline that makes everyone of all political stripes want to click because it's so stupid and provocative.
It's no wonder people seem more angry on social media.
Unfortunately, his solutions are non-starters. Openness and oversight of data and algorithms won't do anything without clear ethical requirements. All it will really do is slow down iteration. Prioritizing tools to shield against hateful content is not in the best interest of social media platforms. Sharing is the main mechanism of retention.
As for the offline stuff...we've been trying to do that since before the internet. See how far that's gotten us.
If I had any solution to offer, I'd say it's to use large platform social media to funnel people to smaller communities where most of the real engagement happens (like Discord). We're already seeing it with brands that don't feel like they can reach their whole audience anymore. Politicians can do that too.
> Prioritizing tools to shield against hateful content is not in the best interest of social media platforms
It's well understood that hateful content is harmful to retention. Off the top of my head I can think of at least three long-term projects at IG that were premised on this.
All major social networks prioritize tools to reduce hate because they know that consuming and sharing of hateful content is bad for their bottom line.
The problem is that people are hateful and they express hate online. It's no longer the case that social networks are amplifying this pattern in any way.
> It's well understood that hateful content is harmful to retention
Not sure how you are defining harmful here, but emotionally charged content designed to upset and anger leads to higher engagement numbers. I would argue that's quite harmful.
If you're talking about Facebook or Instagram, users actually have a lot of control! You can snooze or completely remove other users, groups, and pages. You can ask for "less content like this", and FB uses ML to show you less of similar things. You can add keyword filters to the comments sections under your posts. You can block users from commenting or from posting in your group. You can even do all of these things with ads as well, including removing ads from certain categories.
There aren't power user controls like custom regex filters, but that's because the vast majority of users would not use them, and it's not worth the UI complexity and risk that things like that add.
Instagram used to be the perfect social network. You only saw what you followed and there were clear anti-addiction mechanisms: “you’re all finished” and then historical posts if you were at the end; stories being not lit up if you view them; stuff like that.
Now at the bottom of my feed is random high-engagement stuff. Yeah, dude, I looove photos of space and shit. But I also get stuck in a scroll loop on them. That's why I trusted Instagram.
Now I’ve noticed my behavior is so weird. I go on Instagram and I have a resistance to scroll down because I know there’s scroll bait down there.
Don’t even mind ads. Just scroll bait. I think we both know that was an intentional choice to add that at the end.
> You can snooze or completely remove other users, groups, and pages.
This works until FB/IG/Twitter starts "recommending" more content to your feed to keep you on the platform longer. As far as I know, there's no way to disable this.
I posted this because of this interview Michael Bang Petersen did last October (there's a transcript but you have to click to see it, then scroll down to where Petersen appears):
The noosphere is so young, so recently emerged. That a couple of early aggressive/exploitative/disruptive signals would dominate & run amok is unsurprising. That we mistake the whole enterprise as malign while it is only so few causing chaos is unsurprising but tragic.
I feel like we are all demanding too much, insisting on certainty & comfort. We both need to let these companies figure their own paths out - allow the diversity of their approaches - while also starting to let users band together & do their own moderation, create their own socialized defenses & overlays. I don't want to tell these companies what to do, how to handle these problems, but right now most of their terms of service prevent users & others from mounting any kind of defense of their own. Ultimately the only people I trust to tell us who the bad users are are other users, and we're not all going to agree. I think that's ok, and that we should embrace sovereignty: we should let more democratic forms of social-media-ing emerge.
This researcher has a much better redefinition of the problem than the one our simple fears project onto it. I'd love to have more hope that the world could engage the real problems, could avoid the convenient frustrated blame-games & bully-pulpit regulation the drums of conflict & tension beat for.
For me this is everyone else figuring out why the web 1.0 was so great. If you have your own place you have to make an effort. People might still visit but they will not return if there is nothing interesting going on.
Platforms instead use a system of guaranteed readership: effortless publications that look like they belong because they are consistent with the rest of the garbage heap. (Advertisement is the biggest turd. If there are ads, there is even an incentive to keep the content crappy enough.)
It's like being invited to dinner by a friend vs. a soup kitchen. Why is it not the same? Why are the people at the soup kitchen less friendly? -- are you kidding me?
Moderation has to be the most obvious advantage. If you bring the hate, you won't be invited to the next dinner.
It feeds back into itself: people who don't run websites just for fun fail to understand or appreciate what it takes, which makes their judgement unreasonable. They desire others to live up to standards they themselves do not meet.
A professor publishing on twitter?
> Exposure to hate can help legitimize hate, in part because our views of the other political groups becomes biased.
This was the whole point! Expose one camp to the most stupid argument made by the other camp for ENGAGEMENT!
On your own website you wouldn't just quote the most stupid things you've found on the internet.
At this link there is the article where M.B. Petersen (and a collaborator) give a more precise treatment of the question behind Myth 2 ("Does social media make people hateful?"):
https://psyarxiv.com/hwb83/
I haven't formed an opinion yet, but I'd be curious to hear from anybody who has one. (see among other things how they test for a "hostility gap", page 16).
I haven't been on social media for years but maintain a minimal LinkedIn for professional purposes. To me, it seems a great choice I made years ago. There's too much life to live and not enough time to let randos on the Internet into your life. Increasingly, I see people unable to disconnect from social media. Whatever happens there seems immediately wired into their brain (and by their choice, to some extent).
Edit: If something makes you feel consistently bad or worse about yourself, then tuning out or disconnecting from it is a sane choice. Even murderers can lead meaningful lives behind bars. (Dostoevsky was good for this insight.)
I'd say the fundamental problem with social media is that current incentives strongly encourage companies to make bad social networks.
Cutting advertising and mass spying out of the picture could be enough to all but completely solve the problem. But there may be other ways to fix it. Maybe significantly increasing the liability and risk platforms are exposed to for widely-posted/shared content.
Rather than try to corner companies into regulating & providing speech through checks & balances on their profitability, I'd really like to see online speech be something that users are broadly capable of moderating independently. Most moderation & curation services & systems should be opt-in & separate, and ideally should function across sites, allowing us to comment on & reveal problematic users across sites (or to ignore those who are un-invested, low-quality sock puppets).
Trying to change what these companies are, what they do: it seems like a truly Sisyphean struggle. I don't see any hope of coercing them into becoming better. To me, the onus to be a responsible, healthy, positive society lies on the members of society; we lack the technical starting place to begin to experience these platforms in our own manner, lack the ability to start to self-govern. But the conventional attitude right now seems to be that heightened centralization, amplified stronger louder more-active regulation & clamping down - as you propose, bigger sticks - is the only win. To me, the bigger-stick option is mad, is rampant destruction; we cannot place all the responsibilities of society upon one entity. Society has to have its own stake in here; we can't just pass the buck & demand someone else fix the social quagmires.
But society must be given a chance to defend itself. Something the currently freedomless, constrained, walled gardens quite explicitly forbid.
I suspect advertising might actually sometimes be a moderating effect in that sites/networks occasionally clean up their act to retain major advertising partners. Not great on the whole, but might be one meagre positive.
I will admit that it is impressive to compress this message into several tweets. That said, I have a fundamental problem with the message presented. The biggest one of all is buying into the notion of 'hate speech', primarily due to how loosely it is defined and how eagerly various governments jump on it to curb the remnants of free speech. The fact that this is presented to the public and lawmakers suggests to me that:
1. Author knows his audience and wants to present issues in the language they understand
2. Author believes it
Not even a mention of the issue of bot networks? Is that because they're seen as a very real, if unmentioned, problem? Or is it a pretense that they do not exist and have no impact?
What is the lens here? If it's a Danish lens, then I'm concerned it doesn't fully capture what's happening in other markets.
> Myth 1: A lot of misinfo on social media... No, research suggests there is little, shared by few & having small effects.
The U.S. anti-vax group is quite vocal and quite large. At times this includes elected officials, but sometimes elected officials simply resist mandates which emboldens vaccination resistant groups.
> The U.S. anti-vax group is quite vocal and quite large.
Yes, here [1] is an example of a pro-vax comment, made by a flaired user, removed from r/conservative. The user was probably not notified of the removal, and the comment appears to its author as if it was not removed. To me, that is very misleading. I consider such removals a form of misinformation; other users only see a version of the discussion that is wholly anti-vax. Also, since that comment received 400+ upvotes, you can imagine many of those voters might have left the same comment if they hadn't found that one to upvote.
When a moderator removes a highly upvoted comment, they're not just removing one author's point of view, they're removing the opinions of all the people who voted for it.
A great mirror to the comment one thread up selling /r/conservative as an example of mostly reasonable right-leaning people just discussing things and getting unfairly hated by the rest of Reddit.
Upvoters cannot see removed comments. You can try it here [1] yourself to see the effect.
It's also possible for moderators to add a username to a subreddit's automod config to silently remove all their comments, and that would be a "shadow ban" from that subreddit.
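For illustration, here is a minimal sketch of what such a rule might look like in a subreddit's AutoModerator config (rules are written in YAML; the username here is hypothetical):

```yaml
# Hypothetical AutoModerator rule: silently remove every comment
# from one user. The author is not notified, and the comment still
# appears normal to them -- a per-subreddit "shadow ban".
type: comment
author:
    name: [some_username]   # hypothetical username to shadow-ban
action: remove
```

Because `action: remove` generates no notification, the removed user typically has no way to tell from within Reddit that their comments are invisible to everyone else.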
> 1) Long-terms solutions to online hate requires solving the causes of offline frustrations. No quick fixes exist.
Can't wait to see what their plan is for that. I don't think any can be made; the space will always attract frustrated people, and they will fill the void no matter how few there are.
I disagree harshly with the idea that social media _isn't_ an echo chamber. I mean, look at Reddit over the past few years. They've slowly (and successfully) removed nearly every conservative-leaning subreddit on the site. It's clearly a hive-mind ultra-lib circle jerk, and anyone who doesn't see that has been drinking the blue Reddit Kool-Aid too long.
The only thing left really is r/conservative which is at best made fun of on the rest of the site - and even they have been debating going private.
Another annoying example: it's frequently "fine" for liberals to say stuff like "I hope those who didn't get vaccinated die", which I've seen extremely frequently in things like r/politics and others, but when it's the reverse (r/NoNewNormal) it leads to the entire subreddit being banned?
Not that I condone either side of these ignorant comments, but if you're going to take stuff down, at least be fair to both sides about it.
But hell, even centrists critiques like these are made fun of nowadays - gotta be a part of that extreme echo chamber to be cool, right?! So who am I kidding...
You don't get to embrace bigotry and hatred, and then turn around and complain when you get kicked out. If the conservative subreddits could have managed to behave themselves like rational and good people, they would not be getting kicked out.
That's certainly an unhealthy, unhelpful and hateful attitude. I'm not sure if "bigoted" is the right word here, but that comes down to etymology.
On the other hand when there's a disease that's killed millions worldwide and someone decides to allow themselves to become a vector and let the disease spread rather than get vaxed, I'm not going to shed too many tears if they happen to die from it.
Hoping or praying for somebody's death is ugly, but some schadenfreude may be appropriate.
Answer as a liberal: yes, it is bigoted and hateful.
You'll see people from both sides doing all sorts of pirouettes to justify why when the others do it it's bad, but when they do something similar it's justified, or they'll point out to other stuff to deflect the criticism. In any sufficiently large group of people you'll always find people who are not good at admitting mistakes or doing any self reflection, that goes for conservatives, liberals, socialists, fascists... You can argue it's to different extents, but all have plenty of it.
My experience with social media is that it tends to amplify the stuff that's most reactionary, causing it to spread until the fastest growing one takes over, kind of like a mould on stale bread. The analogy kind of works for new websites too, as it seems new social media is often more enjoyable and it takes some time for toxicity to start seeping in.
Not taking a vaccine and endangering yourself and others as a result is a personal choice, so I would not call it "bigoted" to be angry at someone for making that choice.
"Hateful", maybe. But again, it is personal choice that person made that endangers others. It is also not an active stance - no "liberal" is going out to actively make unvaccinated people die. They are just sitting there hoping for their choices to have extreme consequences.
And the complaints from conservatives about this are definitely not in good faith. They are just an attempt at a gotcha and turning the tables. They themselves do the same kind of thing every time, and do not see a problem with it when it is their own side doing it.
I have encountered many close-minded liberal opinions.
Think about this: we have seen ideas on the right that go too far (e.g., storming the capitol, white nationalism, racism or other forms of hateful discrimination). But there are ideas on the left that go too far also.
Who is keeping check on the other end? Can liberal or left-leaning ideas go too far? When they do, are they criticized publicly, or excused/ignored? If that's not happening, then bigotry will become rampant.
There are no widely accepted ideas on the left that go anywhere near as "too far" as there are on the right. There simply is no comparison there.
There certainly could be such ideas. But right now, as things stand? No, there isn't anything like the absolute madness the right is displaying. Nowhere near.
Perhaps the reason you think that is because of the previously discussed biases that have become normal.
I’ll give two examples:
1. The Black Lives Matter movement. Riots and cities being burned because of a few cases of police brutality were covered in a very sympathetic rather than objective way. In one case the words "mostly peaceful protest" were used to describe one where several things were on fire.
2. Transgender issues. Most ideas about it are never questioned. Parents are allowing their small children to decide what gender they want to be. People are having surgeries and hormone treatments with seemingly no other alternatives ever being explored. Biological men are competing in women's sports and destroying old records. People are switching bathrooms and everyone must just put up with it. No one is questioning whether we are actually taking the healthiest long-term approach. Why? Because they are afraid to question it.
1. The Black Lives Matter protests were, in fact, mostly peaceful. I didn't attend any of the protests but there were several in my city. No broken windows, no fires, no riots. "Peaceful" objectively describes the majority of the protests. Much like most police officers are well meaning individuals who became police because they want to help people, most of the protesters were law abiding folks who want to make America a better place.
2. You have to have some pretty narrow blinders on to think that no one is questioning transgender issues.
You are making some extreme misrepresentations in this comment, like "People are having surgeries and hormone treatments with seemingly no other alternatives ever being explored."
This is very, very, very far from what reality is. This is what you have been told by people pushing far-right ideology. It is not reality.
I know I could simply upvote this but I'm replying to this anyway to say that you're 100% right here. hoo boy that comment got to me and I'm honestly upset that it didn't get downvoted.
You could also say that reddit represents a far wider spectrum of views than normal discourse, just because of stuff like r/childfree or r/witchesagainstpatriarchy. It's all liberal-compatible, but it's still more diverse than normal media, where you have centrists, and the occasional off-center guest, but all exist within an essentially narrow section of possible opinions.
> They've slowly (and successfully) removed nearly every conservative leaning subreddit on the site.
If the content that came from The_Donald is now what half the US considers to be "conservative leaning", we're in big trouble. White nationalism, anti-vax, pro-authoritarian, pro-civil war content. This isn't a debate about tax policy anymore, it's gotten more visceral.
It's gone to a new level where some people just want to appear as not being in the echo chamber, so they label themselves uniquely, but when you listen to them, they are just part of the larger groupthink.
For example I have started seeing many people online calling themselves "libertarians," but they are still in favor of big government ideas such as UBI and universal healthcare. Or "libertarians" who want a white ethnostate enforced by the police
I think we do not have a problem with different opinions; instead we have a problem of dogmas. In other words, side 1 does not talk to side 2, as "open dialogue" with a willingness to "stand open to be corrected" is gone.
The news (especially MSM) is not doing its job but is instead running psy-ops and further polarizing society, while "alternative/poor/amateur media" (like Joe Rogan, etc.) are filling the gap. For example, Sanjay Gupta came over from the other side, and that was good, but we'd need this to happen ~100x more. Then we would not need to reach for any totalitarian censorship, which further amplifies the problem while solving almost nothing (i.e., fact checkers are of very limited value, often biased, partial, and incomplete).
This is to Danish parliament. I wonder if the research he is quoting applies to (a) a larger country (b) with a heterogeneous population that (c) communicates primarily in English (where most misinformation seems to be aimed). Since he didn't link to the studies, it's hard to know. Giving him the benefit of the doubt that these studies really exist. But I certainly think those three factors may be important in his report not generalizing.
> This is to Danish parliament. I wonder if the research he is quoting applies to (a) a larger country (b) with a heterogeneous population that (c) communicates primarily in English (where most misinformation seems to be aimed).
I was thinking primarily about the US, with questions about the UK as well. However, I think India is a reasonable country to also consider. Doesn't India also have the unique challenge of many groups who speak their own non-English, non-Hindi languages, each serving as a misinformation subculture?
I've got severe problems with this posting on multiple levels.
First, the Twitter thread is, as Twitter threads are, entirely insufficient to express nuance or comprehensiveness. The linked slide deck (in Danish) is similarly quite short on details. Its references are also largely unsatisfactory.
The whole presentation reads as if it's by someone who derived their conclusions, then plotted their research. The phrasing and framing ignore obvious evidence of real-world problems. If Petersen were a doctor, he'd be the type to whom a patient clearly suffering, or dying from, some unknown malady might present, to which Petersen would run a standard battery of tests, all inconclusive, and declare: "There's nothing wrong with you, all the lab results are negative."
If there are real-world problems --- disinformation dominating truth, spreading and persisting far more readily, political and genocidal disruptions occurring around the world, mental health epidemics coinciding with the adoption of social media amongst populations, and similar such issues --- then at best we can show that the specific research was either focused on the wrong elements, or performed inadequately.
The fact that changes in media profoundly change societies is well established, and dates back within historical records to the emergence of writing and mathematics. It all but certainly extends to narrative and language as well. Printing, literacy, mass-media, advertising-funded serials, telegraph and telephone, radio, mass recorded media, television, cable distribution, Internet, and mobile algorithmic social media have all left their imprints, quite often in war and genocide.
See Marshall McLuhan, The Gutenberg Galaxy and Elizabeth Eisenstein The Printing Press as an Agent of Change, for an overview of that thesis.
How, and why, Petersen comes to his conclusions concerns me greatly.
Is this specific to Denmark or applicable more widely? Where was this research conducted?
Who was analyzed in this research, and through what medium? Social networks are more and more polarized. Facebook is now dominated by the older generation, and more right-leaning. TikTok is young, and left-leaning. Who was interviewed for these studies?
I'd also like to add a Myth 4: Misinformation is generated agnostically from political tilt. It's not. Neither is the intellectual humility mentioned in Myth 1.
Studies show that some policy opinions are held by their proponents regardless of whether their political party is in charge or not. Other policy opinions are flip-flopped by their proponents depending on who's in charge (i.e., for it when their party is doing it; against it when someone else is).
Any discussion or battle of misinformation with a tilt to myths and facts needs to be aware of this reality to be effective.
This drives me nuts. The irony here is that one of the things social media does is force us to simplify our discourse to the point of infantilization. Subtlety does not get traction.
We need binary truths! Myths and myth-busters.
None of the author's "myths" are myths. They are all true in a general sense. What the author is correctly pointing out is that there is more subtlety: understanding the real situation requires a deeper look.
Of course, that would never get traction on social media, so it must be framed as "No! They are wrong!"
If I only get 10min, I’m still including citations in my 10min presentation (which they claim to have done on the last Tweet in the thread linking to a Dropbox with slides and citations).
Last thing I want is to come across as someone who didn’t do their research.
The irony is that this comment is less informative than the thing that it is criticizing. Basic summary: the truth lies somewhere in the middle. Which is as informative as claiming that the truth is on some kind of binary extreme.
A further irony is the meta-complaining about simplification while being as bare-bones simplistic as it gets. “No! They are wrong!” Really? That’s not what the Twitter thread says at all. For example, it claims that misinformation is not a problem but that the real problem is biased sharing.
The Twitter thread makes a series of concrete claims about social media which can be fact checked. The comment above, however, does not. 4/10.
Seems he should have been given more time. These myths seem lukewarm. What about the Facebook Files from just a few months ago [1][2]? It's hard to get a sane feed these days; they force their feed algorithm down our throats.
It's incredibly weird to me that people are OK with demonizing feelings and tone. Who cares if people are 'hateful'? One man's hate is another's slight aggression, and American toxic positivity makes haters of us all even if we speak with the best of intentions.
I didn't say anything about toxicity; I don't know what that is and haven't found a clear definition. Hatred is a completely appropriate reaction to the perceived stupidity of others. It's a human emotion like any other; it can be expressed as vulgarities, capslock, etc., but the expression isn't the problem here. You can clearly define vulgarity, or other types of unwanted behavior, but this isn't enough for the self-appointed tone police. We are not anything 'as a society'; we are not a society, we are not even remotely in the same culture. We are different peoples that have to interact, and if you set criteria based on the arbitrary standards of civility of your own culture, that is terrible for a number of different reasons.
I totally don't agree with this statement: "Hatred is a completely appropriate reaction to the perceived stupidity of others".
Especially "perceived stupidity": if someone thinks he is smart, he should be smart enough to stop and consider that maybe the other person is right, or try to understand why that person might think differently.
I understand that mostly it is a lack of time; I don't have time to understand the point of view of each and every person on the internet. But still, hatred is not a valid response unless there is a really good reason.
Self-appointed tone police have nothing to do with it. It is that, as in the Twitter thread, normal people will withdraw from the discussion, and the person who discusses with hatred will talk only with people like himself, or will be left alone. That leads to fewer people participating in the discussion, which leads to fruitless dung-throwing.
So I see that there are no "self-appointed tone police"; it is just that people stop contributing when they face 'hatred', which leads to the conclusion that yes, we are a society, and this is how it works.
You have trouble understanding what I'm writing. Hate is just an emotion; you can't ban an emotion. You can ban some bad expressions of an emotion (threats of violence, for example), but trying to make one of the basic human emotions illegal online is idiotic.
I see you don't understand what I am writing as well.
I don't write anything about banning or making emotions illegal.
It is just that I can't see how someone can get angry about something someone wrote somewhere on the internet.
Other thing I am writing about is that expressing anger or negative emotions online is not productive and is scaring away people who could contribute to the discussion.
My OP about demonizing hatred is about banning 'toxicity', 'hate', etc., which serves only as a rationalization for violence or exclusion. There is no rule that we have to be productive all the time, and hatred can be a great motivator: there are books, games, and music that grew out of some very intense hatred.
>It is just that I can't see how someone can get angry about something someone wrote somewhere on the internet.
Can you see how someone gets angry about something off the internet? It's exactly the same except some non-verbal actions are not possible for expressing that anger.
>No, research shows that, for most, social media breaks the bubble. We are more connected to "the others" on social media than in our offline lives.
That just means that social media exposes people to more ideas. Not that it's not an echo chamber. On social media, no matter how dumb your idea is, you can always find someone who agrees with you. It doesn't matter if even more people disagree with you because you found the validation you were hoping for.
This thread has several issues, one of them being that the author doesn't call out tech companies whose algorithms are designed to amplify echo chambers as well as hate. Instead he goes on to claim that social media doesn't create echo chambers.
He has some valid points on the causes, but this is poorly and seemingly very narrowly researched or narrated.
The "echo chambers are a myth" statement seems a bit disingenuous to me, and misses the increased algorithmic exposure to hate in social feeds that we otherwise would not see offline.
A lot of research has been done on this. For most people their IRL social circle is a much smaller and stronger bubble than what their online presence is exposed to. Even a classic “echo chamber” online is likely to expose most people to greater diversity of thought than their offline life does.
For more reading on this I suggest “Breaking the Social Media Prism”
Also note: research shows anon accounts tend to result in more nuanced and less partisan discussion.
IRL social circles are also, for the most part, geographically constrained to those one meets in everyday life. They may be geographically distributed for professionals, amongst co-workers or contacts within one's career track.
Online echo chambers, for better or worse, collect those of a similar interest and/or ideology from around the world. As with, say, Hacker News.
That may be benign, or it may not be. The inspiration and action are what matter. But the end result is that groups that would have no capacity to form and organise through physical contact, and which would be only very weakly integrated through print or broadcast medium, can emerge online. Again, that might be a positive or a negative development.
But what it isn't is either equivalent to IRL associations or a continuation of previous patterns of organisation.
And to that extent, Petersen's claims are grossly disingenuous.
If you continue this experiment, maybe the algorithms will catch on and the recommendations will be tailored to program a split personality into you, so that instead of losing influence over one customer, both partisan sides of yourself will become two useful, valuable customers.
I'm glad to see more of this conversation happening. I made reveddit [1] to provide oversight for secret removals which happen more often than you may think. For example, [2] is an upvoted comment from a mod-verified contributor that was removed, likely without notifying its author.
I wish reddit would indicate to the author when their comment has been removed by a moderator. I don't know if that will ever happen. It seems duplicitous to me to present your content differently to you than to everyone else (see the image at the top of [1]).
As a dramanaut and troll I salute you good sir. May your internet interactions be only pleasant, may your desk fan blow cool in the summer, and may internet denizens heap praises and blessings upon your name wherever ye go, wherever ye be signed in and whenever ye post.
Thanks. Ironically I think of it as an anti-drama tool. For me, the drama is that removals are very often hidden from their authors, and revealing that puts people back on the same page.
I appreciate your desire to point to the research. Many if not most of the comments in this thread are rejecting it out of hand without even looking at it. That's disappointing and, given the topic, ironic.
However, breaking the HN guidelines against shallow dismissals, snark, and sneering is definitely not a good way to make this point. Besides adding poison to the ecosystem, posts like this have no persuasive power—in fact they have negative persuasive power: they reinforce the very thing you're complaining about.
A better way would be to take a look at the actual research and bring in some interesting information that you find there. If you don't want to do that or don't have time to do it, that's fine, but please don't make empty spiteful posts.
"Research shows" needs citations. I get that they only had ten minutes to talk, but if you are going to post a long thread on twitter, adding sources would seem like a completely reasonable thing to do. Otherwise, this thread itself could be misinformation - we have no way to verify.