It's just convenient shorthand for ideas that have been around a long time: the behavior of the serf class when powerful entities decide what it should believe, which may describe all of history. It may or may not be useful to condense all of that into a single term, but we do understand what is being conveyed.
The people I have seen use it, in 90%+ of scenarios, are the ones I described in my previous comment, but apparently observing neo-Nazis use a particular word makes me a sheep somehow, I guess.
AI calm my troubled soul, relieve me from the mounting pressure and let me know that solutions are within reach.
Praise to You, who turns wrong to right. You can do anything at SocialMedia site. Anything at all.
The only limit is yourself.
Okay okay I wasn’t going to post this on HN yet but…
If you own a Discourse forum then you can visit https://engageusers.ai and generate posts on your own forum to kickstart engagement. While you can’t set your own prompts or bot names, I’ve already prepopulated it with common names and AI-generated avatars from loremfaces, and pre-wrote about 7-8 prompts that can make the bots have different attitudes, disagree with posts, have sarcastic asides and discussions, etc.
I have tried to make this project as ethical as possible given so much potential for AI misuse. This is extremely early stage, but if you want to be involved, either as an AI investor or as a developer who enjoys this stuff and wants to collaborate, email me (greg at the domain qbix.com)
And if you don’t have a Discourse forum, we can set you up with your own forum, site, brand, app, community, calendar, videos, bots, all on your own site, helping you sell your services, etc. ChatGPT can help you write a book. In a month we’ll be able to help you have your own personal custom Facebook on your own site. And the bots will help you sell products and services, yours and others’, and earn all the commissions instead of the pittance that YouTube gives you. How’s that for self-empowerment?
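To make the mechanics concrete, here is a minimal sketch of how a forum owner could seed their own Discourse forum with a labeled bot account via Discourse's standard REST API. The forum URL, bot username, and seed text are hypothetical placeholders; this is not the engageusers.ai implementation, just the general shape of the technique.

```python
import json
import urllib.request

def build_bot_post(topic_id: int, body: str) -> dict:
    """Assemble the payload for Discourse's POST /posts.json endpoint."""
    return {"topic_id": topic_id, "raw": body}

def post_as_bot(forum_url: str, api_key: str, bot_username: str, payload: dict):
    """Submit a post under a bot account using a Discourse admin API key.

    The Api-Username header attributes the post to the (clearly labeled)
    bot account rather than to the admin who owns the key.
    """
    req = urllib.request.Request(
        f"{forum_url}/posts.json",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Api-Key": api_key,
            "Api-Username": bot_username,
            "Content-Type": "application/json",
        },
    )
    return urllib.request.urlopen(req)

payload = build_bot_post(42, "Has anyone here actually tried this in production?")
# post_as_bot("https://forum.example.com", "API_KEY", "DiscussBot", payload)
```

The actual post text would come from an LLM prompted with one of the pre-written "attitude" prompts described above; the network call is left commented out since it needs real credentials.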
>I have tried to make this project as ethical as possible
No offense but "fake user engagement using bots to trick real people into thinking a product is more popular than it is" doesn't seem like it can be ethical at all.
Yes and no. The bots are CLEARLY LABELED AS BOTS, for anyone who cares to look, and invite you to https://engageusers.ai to get a bot / forum of your own.
I have struggled to find any use of generative AI that actually benefits society. Most AI seems to simply make things worse the more widely it is deployed. It makes fake stuff cheap to produce at scale, meaning it wasn’t created with intent by its author, but you are meant to think it was.
In the wrong hands, AI could even design super-viruses that kill most of humanity. Less apocalyptically, it can lead to swarms of GPT-4-powered bots creating a ton of fake content everywhere, secretly and surreptitiously, then being deployed to destroy reputations or promote fake news or an agenda.
The least harmful use of AI that I could think of is this.
People already pay their teams to hang out on their own forums and astroturf, not because they organically want to engage so much but because they are paid to welcome users, moderate the forums, etc. Club event organizers pay promoters and attractive women to come get free drinks and make it seem that the club is full of attractive and well-to-do people, then sell bottle service. Dating sites prepopulate their sites with fake profiles to bootstrap their initial customer base, because when a person comes and sees no one, they leave.
Everyone already did this, they just begged their team to astroturf. Now it can be done with bots.
You can’t have usernames like “KKK”. You can’t write your own prompts. The bots’ goal is just to kickstart discussion and explore a topic so that when a person comes, they see content they might want to respond to, rather than a sad, lonely forum with no participation.
MOREOVER, we do not support infiltrating other people’s platforms with bots. We instead make it so the OWNER of the forum welcomes the bots AND chooses what they post.
I believe that you believe… [Edit: no, I take that back; if you are clever enough to build this, you sure as hell understand the ethical issues involved.] But the owner of the forum is not the one subject to the unethical scam being run; it is the users of the forum.
Look I am sure you are clever too. Which is why I am surprised how you could have ignored literally every point I wrote!
Let me try again — address every point below please…
1) Scams involve deception. If the bots are labeled as bots, what is the deception?
2) What you describe as a scam has been done by nearly EVERY SINGLE FORUM since forums began. The owner simply begged some friends or paid some group of workers to “man the forum”. It wasn’t organic except in extremely rare circumstances, because the Nth person encountering a ghost town would bounce, and by induction so would the N+1st. No forum owner would want to burn through many users for nothing. They have been doing this “scam” with practically every forum; now they can do it with bots.
3) Ethics can be expressed in various systems. I would like for you to suggest a single use of Generative AI that would be more ethical by Kant’s Categorical Imperative … ie if everyone used it at scale, would the world be better off? I think for the vast majority of uses of Generative AI, the answer is NO. It generates fake content at scale, to fool people into feeling that someone put effort into creating it organically. I think if you stop to seriously consider this point, you’ll realize that Generative AI is a net negative for the world in almost ALL its applications and mine just happens to be one of the least bad ones.
In your own example they are not.
You have them pose as real humans to deceive people.
So you are lying and deceiving as well, and I am not aware of any ethical framework that considers methodical lying ethical.
"Everyone already did this, they just begged their team to astroturf."
And no, not everyone did this, and even for those who did, there is still a BIG difference between real people posing as real people and fake bots posing as real people. You know, the difference between truth and lie.
edit: I see now that some of the bots are marked as bots, if you click their profile. But no one does that when skimming a blog post. Clear labeling would have meant putting "BOT" in the name, but that would defeat the purpose: to deceive people. So please take your fake bots and your fake ethics somewhere else and stop polluting the internet with even more garbage.
You are wrong on all counts, but I am glad you added the edit, at least. You are starting to understand. Before jumping to conclusions on a holier-than-thou high horse, as if you already know everything coming in, please answer point by point:
1) Pray tell, how did the forums grow from zero discussion to being a vibrant place, without some team of people “astroturfing” discussion?
2) Did they disclose in every message that they were working for money or out of friendship with the owner, rather than because of their genuine interest in 30 topics monthly?
3) If they disclosed their role in their profile, would you consider that ethical enough? Because that is exactly what is going on here.
4) Many people are too lazy to even check the profile, which is how they respond in full earnestness and outrage to Parody accounts on Twitter. Should Parody accounts keep saying “<sarcasm> blah blah </sarcasm>” in every message because people are lazy?
5) If you don’t like this level of deceit, you’re going to hate nearly all applications of Generative AI, because they do far worse and on a far larger scale. Think those photographs are from a real scene? Think that heartfelt letter was written just for you? Think that is your family member begging you to send money to a certain crypto address? Think again. This is one of the least bad things that can be done with Generative AI. Of course you won’t come out and attack most other uses of generative AI, because they won’t disclose themselves to you, and because, like many people in society, you Ready-Fire-Aim: you attack before even thinking through the issues.
I have spent more years thinking about ethics and living them (to personal detriment and voluntary sacrifice) while you and many others cheered Big Tech and Web3 projects which are destructive to humanity, because it was the thing du jour at the time.
So please take your lazy approach to outrage somewhere else or get a little humility.
"If you don’t like this level of deceit, you’re going to hate nearly all applications of Generative AI because they do far worse and on a far larger scale."
Nope, if ChatGPT generates me some code that I can use for a real problem, or generates a product description for a friend's business, then this is generating real value: no fake, no deception, no lie.
What value for society is your business adding?
"1) Pray tell, how did the forums grow from zero discussion to being a vibrant place, without some team of people “astroturfing” discussion?"
By people sharing genuine interests. Not everything is a lie. But you are right, too much is already. So it is in no way ethical to add more lies to the pie.
If ChatGPT generates an essay for you, or a homework assignment, that IS fake, sorry. You didn’t do your homework. You didn’t write the essay. You’re passing it off like you did. If MidJourney “painted” your painting, and you pass it off as your own to others, you’re lying. This guy won a photography contest with a non-real photo:
He was honest enough to reveal it and reject the prize. But had he not done that, humans and honest photos would be out of the running, just as if a chess player with a hidden chess engine entered a tournament. It’s cheating, plain and simple. If you were honest, you’d reveal how you did the job and allow people to choose whether to even have you in the loop, or continue to pay you as much. Even your product description example is fake: the AI didn’t experience the product, didn’t use it, can’t vouch for anything in the product. At least you are in the loop and spot-check the work for accuracy before submitting it. But an AI “describing” a product it has never seen is inherently as fake as a photograph of a scene that never happened.
That is what generative AI does. It allows you to generate things without doing the work. I’m not talking about compilers for higher-level languages; I’m talking about skipping literally as much of the work as you want. Like the whole essay. You practice nothing. You cheat yourself. That in itself is bad, but the much worse part is that people will be offered SERVICES to do it at scale. THAT IS THE PROMISE OF ALL APPLICATIONS OF GENERATIVE AI: to generate good-looking results at scale!
You claim to be appalled by a service that does what nearly every single forum owner already did with human workers. Why aren’t you grasping that this is the raison d’être of the vast majority of AI applications? You disingenuously listed a use case where you just logged into a UI and used it yourself, but obviously the APIs of OpenAI are offered to applications to do this at scale.
Mine is just one of the most ethical ways to do it. I voluntarily put guardrails into it. But nearly EVERY OTHER USE OF THE API is of this nature but far worse. And yes I’d rather MY service get adoption than a far less scrupulous one.
The value it is adding is generating discussion around topics that would otherwise receive no traction, not because they aren’t interesting or the article isn’t well researched and presented, but because they don’t have “social capital” or a swarm of people fake-liking and fake-upvoting them. Here on HN and all other platforms, those that have a few such people have an advantage. How do you think every single piece of content they post gets massive upvotes and likes?
This levels the playing field for people who don’t have money to hire fake shillers and employ a team of humans to deceive others, in what is inherently a deceptive activity (astroturfing). Now everyone can have lots of interesting hooks for discussion, and the bots are clearly labeled as bots! Often the friends and employees of the owner don’t even bother to disclose their relationship, so this actually improves the situation in that respect versus the human shill team.

That is just some of the value it gives, and soon it will also answer helpdesk questions. I can bet you that you will soon be talking to a lot of helpdesk bots who won’t reveal they are bots, and somehow you won’t be shaming all those companies. It will become the norm: many people will be fooled into thinking that a real human took the time to empathize with them and patiently help them, and they’ll treat the service nicely and pay more according to that misunderstanding of “white glove service”, rather than treating it as the cheap commodity that it is.
In fact, that’s at the heart of capitalism and the profit motive: when a new technology appears, or a consultant automates their own job, they keep quiet about it so they can still collect the profit while others think they are doing it all by hand. Employees who do this know they will be fired, or have their pay cut, if they automate themselves out of a job. Capitalism breeds many inherently dishonest motivations, and also greed and jealousy, of course.
Having said that… you also totally ignored my request to answer the numbered points one by one. You seem to be arguing in bad faith, since you are quick to shame others and do not actually seem interested in resolving any misunderstandings or disagreements. Answer the points I raised in your next response, or it will be obvious to everyone that you’re dodging 99% of the substance in order to make a tortured point which is, by and large, completely the opposite of reality.
"Here on HN and all other platforms those that have a few such people have an advantage. How do you think every single piece of content they post gets massive upvotes and likes?"
Yeah, this is the key point. Voting rings etc. exist here too, but they are against the rules and are spotted and banned regularly.
So in general, the pieces with the most upvotes are those where the majority of REAL people think they are interesting, like it should be. The same goes for genuine blogs, forums, etc.: those with INTERESTING content get to the top. But if I encounter a blog with artificial engagement, I will just go away.
"Having said that… you also totally ignored my request to answer point by point the numbered points."
And why do you think I would feel obliged to do so, when you ignore my concrete criticism? You are creating lots of text defending the "ethics" of your business model, when in truth your business model depends on you keeping your twisted sense of ethics. No thank you to such a discussion.
Because you are not discussing in good faith.
"If ChatGPT generates an essay for you, or a homework assignment, that IS fake, sorry"
Because I was not talking about homework, but real code I write. That code works in the real world. Real value.
And the business description describes a real product, and the content came from a real human; just the structuring of the words came with help. Framing this as cheating so your own very shady business comes off as better is just not something I will engage with any further.
I know, you were trying very hard to list a tiny, tiny subset of activities people do with OpenAI’s APIs, so as to steer clear of the vast majority of use cases where you would have to concede the point. And you again avoided answering my points. At this point it’s rather clear you’re not arguing in good faith.
Totally disagree that it is because of how good the content is. I posted the same exact thing twice - one time it got 0 or 1 upvotes and the other time it got 150 and made the front page! Same exact link.
Rather, it is survivorship bias. You see the stuff that made it. The celebrities on Twitter have entire departments working to make sure their content gets likes and comments.
In fact, a decade ago I worked in digital agencies that made Facebook apps, and they all bought 50,000 non-organic “likes” to bootstrap the campaign / app. Nearly everyone who is successful does it. Practically any channel or forum that you’ve ever heard of got that way because it started off with some dedicated people / team members hanging out there every day and not behaving “organically”. Even worse, every very successful forum or celebrity got there by leveraging the existing social capital of some network or organization. Most of it is NOT organic at ALL. I am surprised that you would be so naive as to think that putting up a forum with no comments is all you need to do to become a well-trafficked forum.
PS: I brought up your concerns, which I share regarding the vast majority of AI applications, in a Discuss HN post. It got 14 upvotes and 7 comments in a few minutes, and then was flagged. Two hours later it was unflagged, but by that time all momentum had cooled. I can assure you that far larger business models and far larger interests are at stake that do not want this conversation to happen than my little app. If you actually cared about this issue, you’d care about seeing it everywhere it exists:
I recognize that spam has theoretical value in the sense that people pay spammers, but I never thought I'd see someone on HN so holier-than-thou about the supposed virtue in being a spammer.
Adding AI doesn't make your product less spam. At least be adult enough to be upfront and honest about it: you sell spam.
If your conscience is okay with that, then we can't stop you. People work in all kinds of gross industries and justify it to themselves however they will, but don't ask us to share in your justifications.
Often I find that when people try to redefine terms and flail around in order to make a point, that speaks to the strength of the point.
This isn't SPAM. Here is a definition of SPAM:
unsolicited usually commercial messages (such as emails, text messages, or Internet postings) sent to a large number of recipients or posted in a large number of places
Now, obviously this isn't emails or text messages, but consider Internet posting SPAM. There is a forum, designed for good, productive discussion, whose moderators want good discussion and following rules. The moderators would ideally prefer if everyone had a fully filled-out profile and reputation, no anonymous throwaway accounts with "New account who dis?" in their profile. A large number of recipients are reading these messages, having a productive conversation.
So, in a SPAM situation, an unsolicited third party comes and inserts themselves into the most well-trafficked conversations, in order to link to some external page, etc. Their message is often off-topic and tries to promote some product, etc. Let's compare that to what's going on here:
1) The third party isn't unsolicited. In fact, the moderators and owners themselves are operating it, and they're the ones setting the rules of the forum, so it's not even a third party. It's a helpful text generator, that is clearly labeled as a "Bot" in its profile. Just like Telegram Bots are labeled "Bots".
2) The goal isn't to post in the most highly trafficked topics so many people can see it – quite the opposite, it is to kickstart topics which aren't getting any attention, and give people some ideas to talk about (ever heard of McDonald's theory? No one wants to go first, until the first fool does: https://jonbell.medium.com/mcdonalds-theory-9216e1c9da7d)
3) The post doesn't link to some external page. In fact, it currently adds some text, without any links, to keep people on the existing site and engage there. Perhaps in the future, if links are added, they'll only be helpful links.
4) The post doesn't try to promote any external products with off-topic language. It simply tries to improve discussion on the forum, by staying on topic.
5) The same content isn't posted in a large number of places. It is actually constructive commentary or opinions that spur a discussion around the very thing the OP posted.
DO YOU NOT RECOGNIZE THE DIFFERENCES? If you still disagree this is different than SPAM, please address the above five points individually. I would love to be proven wrong on the substance.
There is even an XKCD comic highly approving of what I have built. Mission is f*&@ing accomplished! https://xkcd.com/810/
"There is even an XKCD comic highly approving of what I have built."
Dude. Just no.
The comic is about fighting spam. The comments your bots are producing still are uninteresting spam, pretending to be interesting. Worse spam in other words. But go ahead, ask Randall, whether he thinks you accomplished that mission.
The comic says the mission is accomplished when bots are made that create “automated helpful comments” which are upvoted by others. That is exactly what this is. This is the endgame which the main character approves of. As for whether the comments are pretending to be individually written by a human, the xkcd comic already presumes they are; the relevant part is whether people upvote them more than other comments. People who, unlike you, evaluate them on their actual content, and not as someone arriving from an HN link with a score to settle no matter the content.
You are just flailing around having lost and notably did not even attempt to refute a SINGLE difference I carefully listed between this and SPAM. That’s very telling!
This kind of tit-for-tat flamewar is against HN's rules and you broke both the site guidelines badly. We ban accounts that do that. If you'd please review https://news.ycombinator.com/newsguidelines.html and avoid this in the future, we'd appreciate it. We want curious conversation here.
With MOOCs, learning outcomes are highly sensitive to the collaborative discourse culture which develops, and to student experiences early in the course. But time is short. If setting up the culture (modeling, correcting, and tuning) takes you a week or a few, you've lost scarce time and had suboptimal onboarding.
So one strategy is to seed course discussion forums with exemplar content, rather than starting them empty. So there's an existing "established" culture and norms to be read and adapted to.
LLMs might help with this. Also, more generally, with discourse-ecology gardening: moderation and extrapolations of current bots, but also peer-ish roles, neighbor-norming, and "avoid things falling through the cracks due to limited available human attention" caretaking.
Edit: also very few pictures and no videos that I could see.
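The seeding strategy above can be sketched as a small prompt-builder: one prompt per persona, so the exemplar thread reads like several distinct voices rather than one voice talking to itself. The personas and prompt wording here are purely illustrative assumptions, not from any real course; the resulting prompts would be sent to whatever LLM API is available.

```python
# Hypothetical persona seeds for kickstarting a course discussion forum.
PERSONAS = [
    "a supportive peer who asks clarifying questions",
    "a skeptic who politely disagrees and asks for evidence",
    "a TA-like voice who summarizes and points to course material",
]

def seed_prompts(topic: str, personas=PERSONAS) -> list[str]:
    """Build one LLM prompt per persona for an exemplar opening post."""
    return [
        f"You are {p}. Write a short, on-topic forum post about: {topic}. "
        "Model the discussion norms you want students to adopt."
        for p in personas
    ]

prompts = seed_prompts("interpreting confidence intervals")
```

Each generated post would then be published under a clearly labeled seed account before the course opens, giving arriving students an established culture to read and adapt to.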