I actually think we’re overestimating how much of "losing our voice" is caused by LLMs. Even before LLMs, we were doing the same tweet-sized takes, the same Medium-style blog posts, and the same corporate tone.
Ironically, LLMs might end up forcing us back toward more distinct voices because sameness has become the default background.
My theory is that LLMs are accelerating [online] radicalization by commoditizing bland, HR-approved opinions. If you want to sound like a human on the internet, for better or for worse the easiest way is to say something that would make Anthropic’s safety team have a heart attack.
I mean there's still Grok... surely that gives many safety teams heartburn.
But I find this take interesting. The brewing of a new kind of counterculture that forces humans to express themselves creatively. Hopefully it doesn't get too radical.
> commoditizing bland, HR-approved opinions. If you want to sound like a human on the internet, for better or for worse the easiest way is to say something that would make Anthropic’s safety team have a heart attack.
I agree.
LLMs are like blackface for dumbfucks: LLMs let the profoundly retarded put on the makeup and airs of the literati so they can parade around self-identifying as if they have a clue.
If you don't like the barbs in this kind of writing, prepare for more anodyne corporate slop. Every downvote signals to the algorithm that you prefer mediocrity.
I'm all about using the rainbow of language to its full breadth, but if you're going to go for shock jock you should maybe have something better to say than "hurdurr I smart and world bad 'cause dumb people". Makes you sound a little like you're trying too hard to be part of the 'literati', whatever that's supposed to mean.
You're absolutely right! Tell me more about how ironic it is that the post about having a unique voice is written in one-sentence-paragraph LinkedIn clickbait style.
Yes, fully agreed. Most people producing content were always doing it to get quick clicks and engagement. People always had to filter things anyhow, and you had to choose where you got your content from.
People were posting Medium pieces that rewrote someone else's content, often wrongly, and so on.
Yes. That particular content-farm business model (rewrite 10 articles -> add SEO slop -> profit) is effectively dead now that the marginal cost is zero.
I mean, if you typed something by your own hand it is in your voice. The fact that everyone tried to EMULATE the same corporate tone does not at all remove people's individual ways of communicating.
I’m not sure I agree with this sentiment. You can type something "by hand" and still have almost no voice in it if the incentives push you to flatten it out.
A lot of us spent years optimizing for clarity, SEO, professionalism, etc. But that did shape how we wrote, maybe even more than our natural cadence. The result wasn’t voice, it was everyone converging on the safe and optimized template.
If you chose to trade your soul to 'incentives', and replace incisive thought with bland SEO and professionalism -- you chose this. Your voice has become the bland language of business.
If you care about voice, you still can get a lot of value from LLMs. You just have to be careful not to use a single word they generate.
I've had a lot of luck using GPT5 to interrogate my own writing. A prompt I use (there are certainly better ones): "I'm an editor considering a submitted piece for a publication {describe audience here}. Is this piece worth the effort I'll need to put in, and how far will I need to cut it back?". Then I'll go paragraph by paragraph, asking whether it has a clear topic and flows, and then I'll say "I'm not sure this graf earns its keep" or something like that.
GPT5 and Claude will always respond to these kinds of prompts with suggested alternative language. I'm convinced the trick to this is never to use those words, even if they sound like an improvement over my own. At the first point where that happens, I dial my LLM-wariness up to 11 and take a break. Usually the answer is to restructure paragraphs, not to apply the spot improvement (even in my own words) the LLM is suggesting.
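For the curious, the loop is trivial to wire up (a minimal sketch with the OpenAI Python SDK; the model name and audience string are placeholders, not recommendations):

    # Sketch of the editor-interrogation loop described above.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    EDITOR_PROMPT = (
        "I'm an editor considering a submitted piece for a publication "
        "aimed at {audience}. Is this piece worth the effort I'll need "
        "to put in, and how far will I need to cut it back?"
    )

    def interrogate(draft: str, audience: str) -> str:
        # The LLM critiques; none of its suggested wording goes back
        # into the piece.
        resp = client.chat.completions.create(
            model="gpt-5",  # placeholder; use whatever model you have
            messages=[
                {"role": "system", "content": EDITOR_PROMPT.format(audience=audience)},
                {"role": "user", "content": draft},
            ],
        )
        return resp.choices[0].message.content

Follow-ups go paragraph by paragraph in the same conversation; the output is only ever a critique to react to, never replacement text.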
LLMs are quite good at (1) noticing multi-paragraph arcs that go nowhere (2) spotting repetitive word choices (3) keeping things active voice and keeping subject/action clear (4) catching non-sequiturs (a constant problem for me; I have a really bad habit of assuming the reader is already in my head or has been chatting with me on a Slack channel for months).
Another thing I've come to trust LLMs with: writing two versions of a graf and having it select the one that fits the piece better. Both grafs are me. I get that LLMs will have a bias towards some language patterns and I stay alert to that, but there's still not that much opportunity for an LLM to throw me into "LLM-voice".
What I struggle more with is things like Grammarly, where it's a mix of fixing very nitpicky grammar, spelling, and structure issues that push things from casual writing in my own voice into more of a professional tone.
They’re also great, in my experience, for overcoming writer’s block and procrastination. Just as a rubber duck to bounce ideas off of and follow different threads.
It makes the writing process faster and more enjoyable, even though I never use anything the LLM generates directly.
Workshopping with humans is even better, if you find the right humans, but they have an annoying habit of not being available 24/7.
Yeah, easier to type, easier to read, deliberately misspelled so it sticks out to copyeditors. I use it sometimes without thinking. An LLM would have caught that! :)
All of this sounds like something you could just do yourself after putting a piece down for a day or two and coming back to it with fresh eyes. What benefit is there of cooking the oceans with a bullshit generator?
Like, sure, it's possible to do this with an LLM, but it's also possible to do it without, at roughly similar levels of effort, without contributing to all of the negative externalities of the LLM/genAI ecosystem.
Because the complaints about the power and water usage of AI are mostly motivated reasoning. I don't like AI, therefore I'm going to find a reason not to like it.
Listen, if it's Greta Thunberg pointing out that AI datacenters use a lot of resources, yeah, I'm willing to listen. But when the voices saying "but what about all the water/electricity it's wasting" are coming from individuals I know personally who haven't previously given a shit about the planet or conservation or recycling, and who have made fun of me for reusing things instead of throwing stuff into the garbage, I'm sorry, but those complaints from those individuals fall on deaf ears. Not saying you are one, just a theme I've noticed with people in my life.
There's something unique about art and writing where we just don't want to see computers do it
As soon as I know something is written by AI I tune out. I don't care how good it is - I'm not interested if a person didn't go through the process of writing it
I had a weird LLM use instance happen at work this week. We were in a big important protocol review meeting with 35 remote people, and someone asked how long IUDs take to begin working in patients. I put it in ChatGPT for my own reference and read the answer in my head but didn't say anything (I'm ops, I just row the boat and let the docs steer the ship). Anyway, this bigwig Oxford/Johns Hopkins cardiologist who we pay $600k a year pipes up in the meeting, and her answer is VERBATIM reading off the ChatGPT language word for word. All she did was ask it the answer and repeat what it said! Anyway it kinda made me sad that all this big fancy doctor is doing is spitting out lazy default ChatGPT answers to guide our research :( Also everyone else in the meeting was so impressed with her, "wow Dr. so and so thank you so much for this helpful update!" etc. :-/
The only way I can understand that as an explanation is if your entire company can see each other's chats, and so she clicked yours and read the response you got. Is that what you're saying?
They're saying that the shared account is enough for OpenAI to provide the same result. Very interesting; I'd like to know more, e.g. was the query about a generic IUD or a specific one? Also, the Doc is a cardiologist; they don't specialize in Gyno stuff, and their training/schooling is enough for them to evaluate sources.
Just for reference, before AI it was typical for employers of doctors to pay for a service/app called UpToDate, which provided vetted info for docs, like a Google for medicine.
There were several specific brands cited in the response and she read through them one by one in the same order with the supporting details, word for word. I think it just gave us the same response and she read it off the page.
How else would she have been able to parrot the exact same GPT response without reading it directly? You think she just thought of it word for word exactly the same off the top of her head?
The LLM may well have pulled the answer from a medical reference similar to the one used by the doctor. I have no idea why you think an expert in the field would use ChatGPT for a simple question; that would be negligence.
A climate scientist I follow uses Perplexity AI in some of his YouTube videos. He stated one time that he uses it for the formatting, graphs, and synopses, but knows enough about what he's asking that he knows what it's outputting is correct.
An "expert" might use ChatGPT for the brief synopsis. It beats trying to recall something learned about a completely different sub-discipline years ago.
The one thing a cardiologist should be able to do better than a random person is verify the plausibility of a ChatGPT answer on reproductive medicine. So I guess/hope you're paying for that verification, not just the answer itself.
You're absolutely right! Art is the soul of humanity and without it our existence is pointless. Would you like me to generate some poetry for you, human?
And what's more, the suspicion that something was written by AI causes you to view any writing in a less charitable fashion. And because it's been approached from that angle, it's hard to move the mental frame back to being open to the writing. Even untinged writing is infected by the smell of LLMs.
That's what's happening to me with music and discovering new artists. I love music so much, but I simply cannot trust new music anymore. The lyrics could be written by AI, the melodies could've been recommended by AI, or even the full-blown song could've been made by AI. No thanks, back to the familiar stuff...
If the writer’s entire process is giving a language model a few bullet points… I’d rather them skip the LLM and just give me the bullet points. If there’s that little intent and thought behind the writing, why would I put more thought into reading it than they did to produce it?
Writing nice-sounding text used to require effort and attention to detail. This is no longer the case, and this very useful heuristic has been completely obliterated by LLMs.
For me personally, this means that I read less on the internet and more pre-LLM books. It's a sad development nevertheless.
A person can be just as wrong as an LLM, but unless they're being purposefully misleading, or sleep-writing, you know they reviewed what they wrote for their best guess at accuracy.
Art, writing, and communication are about humans connecting with each other and trying to come to mutual understanding. Exploring the human condition. If I’m engaging with an AI instead of a person, is there a point?
There’s an argument that the creator is just using AI as a tool to achieve their vision. I do not think that’s how people using AI are actually engaging with it at scale, nor is it the desired end state of people pushing AI. To put it bluntly, I think it’s cope. It’s how I try to use AI in my work but it’s not how I see people around me using it, and you don’t get the miracle results boosters proclaim from the rooftop if you use it that way.
> There's something unique about art and writing where we just don't want to see computers do it
Speak for yourself. Some of the most fascinating poetry I have seen was produced by GPT-3. That is to say, there was a short time period when it was genuinely thought-provoking, and it has since passed. In the age of "alignment," what you get with commercial offerings is dog shite... But this is more a statement on American labs (and to a similar extent, the Chinese who have followed) than on "computers" in the first place. Personally, I'm looking forward to the age of computational literature, where authors like me would be empowered to engineer whole worlds, inhabited by characters ACTUALLY living in the computer. (With the added option of the reader playing one of the parts.) This will radically change how we think about textual form, and I cannot wait for compute to make it possible.
Re: modern-day slop, well, the slop is us.
Denial of this comes from a place of ignorance; take the blinkers off and you might learn something! Slop will eventually pass, but we will remain. This is the far scarier proposition.
"inhabited by characters ACTUALLY living in the computer"
It's hard to imagine these feeling like characters from literature and not characters in the form of influencers / social media personalities. Characters in literature are in a highly constrained medium, and only have to do their story once. In a generated world the character needs to be constantly doing "story things". I think Jonathan Blow has an interesting talk on why video games are a bad medium for stories, which might be relevant.
Please share! Computational literature is my main area of research, and constraints are very much at the center of it... I believe that there are effectively two kinds of constraints: those in the language of stories themselves, as thing-in-itself, as well as those imposed by the author. In a way, authorship is incredibly repressive: authors impose strict limits on the characters, what they get to do, etc. This is a form of slavery. Characters in traditional plays only get to say exactly what the author wants them to say, when he wants them to say it. Whereas in computational literature, we get to emancipate the characters! This is a far cry from "prompting," but I believe there are concrete paths forward that would be somewhat familiar to (but not necessarily click for) game-dev people.
Now, there are fundamental limits to the medium (as a function of computation), but that's a different story.
Just so I understand who I am talking with here, when you say authorship is a form of slavery, is that because you believe the characters in a written story have a consciousness/sentience/experience just like animals do, or are you just using the word 'slavery' to mean that in traditional literature the characters are static? One of the strengths of traditional literature is that staticness, however, because the best stories from literature are necessarily highly engineered and contrived by the author. Great stories don't happen in the real world (without dramatization of the events) exactly because too many things can happen for a coherent narrative to unfold.
I'm a huge fan of Dwarf Fortress, but the stories aren't Great without imagination from the player selectively ignoring things. Kruggsmash is able to make them compelling because he is a great author
> Characters in traditional plays only get to say exactly what the author wants them to say
But the human actors sometimes ad-lib. As well as being in control of intonation and body language. It takes a great deal of skill to portray someone else's words in a compelling and convincing manner. And for an actor I imagine it can be quite fun to do so.
> Personally, I'm looking forward to the age of computational literature, where authors like me would be empowered to engineer whole worlds, inhabited by characters ACTUALLY living in the computer.
So you want sapient, and possibly sentient, beings created solely for entertainment? Their lives constrained to said entertainment? And you'd want to create them inside of a box that is even more limited than the space we live in?
My idea of godhood is to first try to live up to a moral code that I'd be happy with if I was the creation and something else was the god.
If this isn't what you meant, then yes, choose your own adventure is fun. But we can do that now with shared worlds involving other humans as co-content creators.
> So you want sapient, and possibly sentient, beings created solely for entertainment? Their lives constrained to said entertainment? And you'd want to create them inside of a box that is even more limited than the space we live in?
Sshh! If they know we've figured it out, we'll all be restarted again.
I would love to see truly good AI art. Right now the issue is that AI isn't at the point where it could, by itself, produce actually good art. If we had to define art, it would be kind of the opposite of what LLMs produce right now: LLMs try to produce the statistical norm, while art is more about producing something out of the norm. If LLMs/AI try to produce out-of-norm things right now, they only produce something random, without connections.
Art is something out of the norm, and it should make some sense at some clever level.
But if there was AI that truly could do that, I would love to see it, and would love to see even more of it.
You can see this clearly if you ask AI to make original jokes. These usually aren't too good, and if they are good, it's because the model got randomly lucky somehow. It is able to come up with related analogies for the jokes, but this is just simple pattern matching of what is similar to what, not insightful and clever observation.
I've lost the link, but there was quite a cool video of virtual architecture created by AI. It was OK because it wasn't trying to be human-like - it was kind of uniquely AI. Not the exact one, but this kind of stuff: https://www.reddit.com/r/Futurism/comments/1oedb0m/were_ente...
I deleted my Facebook account a couple of years ago and my Twitter one yesterday.
It's not just LLMs, it's how the algorithms promote engagement: rage bait, videos with obvious inaccuracies, etc. Who gets rewarded? The content creators and the platform. Engaging with it just seems to accentuate the problem.
There need to be algorithms that respect cohorts' and individuals' preferences.
Just because I said to someone 'Brexit was dumb', I don't expect to get fed 1000 accounts talking about it 24/7. It's tedious and unproductive.
> It's not just LLMs, it's how the algorithms promote engagement. i.e. rage bait, videos with obvious inaccuracies etc.
I guess, but I'm on quite a few "algorithm-free" forums where the same thing happens. I think it's just human nature. The reason it's under control on HN is rigorous moderation; when the moderators are asleep, you often see dubious political stuff bubble up. And in the comments, there's often a fair amount of patently incorrect takes and vitriol.
On HN everybody sees the same ordering. Therefore you get to read opinions that are not specifically selected to make you feel just the perfect amount of outrage/self-righteousness.
Some of that you may experience as 'dubious political stuff' and 'patently incorrect takes'.
Edit, just to be clear: I'm not saying HN should be unmoderated.
Yeah this is a critical difference, most of the issues are sidestepped because everyone knows nobody can force a custom frontpage tailored for a specific reader.
So there’s no reason to try a lot of the tricks and schemes that scoundrels might have elsewhere, even if those same scoundrels also have HN accounts.
Only when certain people don't decide to band together and hide posts from everyone's feed by abusing the "flag" function. Coincidentally, those posts often fit neatly into the categories you outlined.
Abuse of the flagging system is probably one of the worst problems currently facing HN. It looks like mods might be trying to do something about it, as I've occasionally seen improperly-flagged posts get resuscitated, but it appears to take manual action by moderators, and by the time they get to it, the damage is done: The article was censored off the front page.
Even with the addition of tomhow, they are clearly stretched too thin to make any meaningful impact. Their official answer to this issue, by the way, is to point out that you can email them to elicit this manual action, which if you ask me is a fucking joke. It clearly shows that the mammoth-age stack this site is written in, and the lack of resources allocated to its support, are having a massive impact on their ability to keep up with massive traffic. But then again, this site only exists to funnel attention to YC's startups, and that is something you need to keep in mind while trying to answer any questions about its current state.
I don't think I've ever downvoted anyone on Hacker News - it just does not seem important.
On reddit, on the other hand, I just had to downvote wrong opinions. This works to some extent, until moderators interfere and ban you. That part actually made me stop using reddit, in particular since someone made a complaint and I got banned for some days. I objected, and the moderators of course did not respond. I cannot accept random moderators just chiming in arbitrarily and flagging "this comment you made is a threat" when it clearly was not. But you cannot really argue with reddit moderators.
You can't get banned just for downvoting. Nobody can see someone else's voting history. You buried the lede: you were banned for your comments, not for your voting activity.
I don’t know why this is being downvoted, I’ve witnessed it many times myself.
It’s true that HN has a good level of discussion but one of the methods used to get that is to remove conversation on controversial topics. So I’m skeptical this is a model that could fit all of society’s needs, to say the least.
The comment consists of criticism on flagging behavior. Though it might have a point, it seems only vaguely related to its parent comment about non-personalized ordering.
In downvoting it, they are proving me right. For posterity, there is a mastodon account [0] collecting flagged posts in an easily digestible form, it really does paint a certain picture if you ask me.
I want to agree with this. Maybe OP is young or didn't frequent other communities before "social networks", but on IRC, even on Usenet you'd see these behaviors eventually.
Since they are relatively open, at some point someone comes in who doesn't care about anything or is extremely vocal about something, and... there goes the nice forum.
MySpace was quite literally my space. You could basically make a custom website with a framework that included socialisation. But mostly it was just GeoCities for those who might only want to learn HTML. So it was a creative canvas with a palette.
I think the nuance here is that with algorithmic based outrage, the outrage is often very narrow and targeted to play on your individual belief system. It will seek out your fringe beliefs and use that against you in the name of engagement.
Compare that to a typical flame war on HN (before the mods step in) or IRC.
On HN/IRC it’s pretty easy to identify when there are people riling up the crowd. And they aren’t doing it to seek out your engagement.
On Facebook, etc, they give you the impression that the individuals riling up the crowd are actually the majority of people, rather than a loud minority.
There's a big difference between consuming controversial content from people you believe are a loud minority vs. controversial content from (what you believe is) a majority of people.
Or if the moderation was good someone would go “nope, take that bullshit elsewhere” and kick them out, followed by everyone getting on with their lives. It wasn’t obligatory for communities to be cesspits.
> Maybe OP is young or didn't frequent other communities before "social networks", but on IRC, even on Usenet you'd see these behaviors eventually
I’m not exactly old yet, but I agree. I don’t know how so many people became convinced that online interactions were pleasant and free of ragebait and propaganda prior to Facebook.
A lot of the old internet spaces were toxic cesspools. Most of my favorite forums eventually succumbed to ragebait and low effort content.
But Serdar was relatively easy to ignore, because it was just one account, and it wasn't pushed on everyone via an algorithm designed to leverage outrage to make more money for one of the world's billionaires. You're right: pervasiveness and scale make a significant difference.
I would be intrigued by using an LLM to detect content like this and hold it for moderation. The elevator pitch would be training an LLM to be the moderator because that's what people want to hear, but it's most likely going to end up a moderator's assistant.
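A minimal sketch of what I mean, assuming any chat-completions-style API; the prompt wording and model name here are invented for illustration:

    # Sketch: LLM as a moderator's assistant -- flag likely ragebait
    # for human review rather than auto-removing it.
    from openai import OpenAI

    client = OpenAI()

    def hold_for_moderation(comment: str) -> bool:
        # Ask the model a yes/no question about the comment.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": (
                    "Answer YES or NO only: is this comment primarily "
                    "rage bait, a personal attack, or engagement farming?")},
                {"role": "user", "content": comment},
            ],
        )
        return resp.choices[0].message.content.strip().upper().startswith("YES")

Held comments would go into a human queue; the LLM never gets the final word.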
I think the curation of all media content using your own LLM that has been tuned using your own custom criteria _must_ become the future of media.
We've long done this personally at the level of a TV news network, magazine, newspaper, or website -- choosing info sources that were curated and shaped by gatekeeper editors. But with the demise of curated news, it's becoming necessary for each of us to somehow filter the myriad individual info sources ourselves. Ideally this will be done using a method smart enough to take our instructions and route only approved content to us, while explaining what was approved/denied and being capable of being corrected and updated. Ergo, the LLM-based custom configured personal news gateway is born.
Of course the criteria driving your 'smart' info filter could be much more clever than allowing all content from specific writers. It could review each piece for myriad strengths/weaknesses (originality, creativity, novel info, surprise factor, counter intuitiveness, trustworthiness, how well referenced, etc) so that this LLM News Curator could reliably deliver a mix of INTERESTING content rather than the repetitively predictable pablum that editor-curated media prefers to serve up.
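As a sketch of how such a curator might hang together (the criteria, model name, and JSON shape are all illustrative, not a spec):

    # Sketch of the "LLM News Curator": score each piece against
    # user-defined criteria and route only approved content, with a
    # stated reason so the filter can be audited and corrected.
    import json
    from openai import OpenAI

    client = OpenAI()

    CRITERIA = ["originality", "novel info", "surprise factor", "trustworthiness"]

    def curate(article_text: str) -> dict:
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder
            response_format={"type": "json_object"},
            messages=[
                {"role": "system", "content": (
                    "Score this article 0-10 on each of these criteria: "
                    + ", ".join(CRITERIA)
                    + '. Reply as JSON: {"scores": {...}, '
                    '"approved": true/false, "reason": "one line"}')},
                {"role": "user", "content": article_text},
            ],
        )
        return json.loads(resp.choices[0].message.content)

Every deny comes with a reason the user can read and push back on - the "capable of being corrected and updated" loop described above.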
That's the government regulation I want but it's probably not the government regulation we will get because both major constituencies have a vested interest in forcing their viewpoints on people. Then there's the endless pablum hitting both sides, giving us important vital cutting edge updates about influencers and reality TV stars whether we want to hear about them or not...
We say we want to win the AI arms race with China, but instead of educating our people about the pros and cons of AI as well as STEM, we know more than we want to know about Kim Kardashian's law degree misadventures and her belief that we faked the moon landing.
Which is why you should cancel your Twitter account unless you're on the same page with the guy who owns it, but I digress.
If a site wants to cancel any ideology's viewpoint, that site is the one paying the bills and they should have the right to do it. You as a customer have a right to not use that site. The problem is that most of the business currently is a couple of social media sites, and the great Mastodon diaspora never really happened.
Edit: why do some people think it is their god-given right that should be enforced with government regulation to push their viewpoints into my feed? If I want to hear what you guys have your knickers in a bunch about today, I will seek it out, this is the classic difference between push and pull and push is rarely a good idea.
My social media feeds had been reduced to about 30% political crap, 20% things I wanted to hear about, and about 50% ads for something I had either bought in the deep dark past or had once Google searched plus occasionally extremely messed up temu ads. That is why I left.
When video games first started taking advantage of behavioral reward schedules (e.g. Skinner-box stuff such as loot crates and random drops), I noticed it and would discuss it among friends. We had a colloquial name for them: "crack points" (i.e., like the drug). For instance, the random drops that happen in a game like Diablo 2 are rewarding in very much the same way that a slot machine is rewarding. There's a variable ratio of reward, and the bit that's addicting is that you don't know when the next "hit" will be, so you just keep pulling the lever (in the case of a slot machine) or doing boss runs (in the case of Diablo 2).
We were three friends: a psychology major, a recovering addict, and then a third friend with no background for how these sorts of behavioral addictions might work. Our third friend really didn't "get it" on a fundamental level. If any game had anything like a scoreboard, or a reward for input, he'd say "it's crack points!" We'd roll our eyes a bit, but it was clear that he didn't understand that certain reward schedules had a very large effect on behavior, and not everything with some sort of identifiable reward was actually capable of producing behavioral addiction.
I think of this a lot on HN. People on HN will identify some surface similarity, and then blithely comment "see, this is nothing new, you're either misguided or engaged in some moral panic." I'm not sure what the answer is, but if you cannot see how an algorithmic, permanently-scrolling feed differs from people being rude in the old forums, then I'm not sure what would paint the picture for you. They're very different, and just because they might share some core similarity does not actually mean they operate the same way or have the same effects.
Thanks for this. I didn't realize until you said it why this issue might not be observable to a certain group of people. I think this is a cognitive awareness issue. You can't really see it until you have an awareness of it through experience. I came from a drug abuse background, and my wife was never involved in the level of addiction I was involved in, and she has a hard time seeing how algorithms like this affect behavior.
>If any game had anything like a scoreboard, or a reward for input, he'd say "it's crack points!"
I don't think it's exactly wrong; you just have to look at it on a spectrum from minimal addictiveness to meth-level addiction. For example, in quarter-fed games, getting a high score displayed to others was quite the addictive behavior.
I suspect it got worse with the advent of algorithm-driven social networks. When rage inducing content is prevalent, and when engaging with it is the norm, I don't see why this behaviour wouldn't eventually leak to algorithms-free platforms.
Algorithm driven social media is a kind of pollution. As the density of the pollution on those sites increases it spills out and causes the neighbors problems. Think of 4chan style raids. It wasn't enough for them to snipe each other on their site, so they spread the joy elsewhere.
And that's just one type of issue. You have numerous kinds of paid actors that want to sell something or cause trouble or just general propaganda.
The thing is, the people on those "algorithm-free" forums still get manipulated by the algorithm in the rest of their life. So it seeps into everything.
It is of course human nature. The problem is what happens when algorithms can reinforce, exaggerate, and amplify the effects of this nature to promote engagement and ad clicks. It's a cancer that will at the very least erode the agency of the average individual and at worst create a hive mind that we have no control over. We are living in the preview of it all, I think.
I know that some folks dislike it, but Bluesky and atproto in particular have provided the perfect tools to achieve this. There are some people, largely those who migrated from Twitter, who mostly treat Bluesky like an all-liberal version of Twitter, which results in a predictably toxic experience, like bizarro-world Twitter. But the future of a less toxic social media is in there, if we want it. I've created my own feeds that allow topics I'm interested in and blacklist those I'm not -- I'm in complete control. For what it's worth, I've also had similarly pleasant experiences using Mastodon, although I don't have the same tools there that I do on Bluesky.
I personally don't feel like an ultra-filtered social media which only shows me things I agree with is a good thing. Exposing yourself to things you don't agree with is what helps us all question our own beliefs and prejudices, and grow as people. To me, only seeing things you know you are already interested in is no better than another company curating it for me.
I think it's less about content topic and more meta content topic. EG I don't want to remove pictures of broccoli because I don't like broccoli, I'm trying to remove pictures of food because it makes me eat more. Similarly, I don't want to remove Political Takes I Disagree With, I want to remove Political Takes Designed To Make Me Angry. The latter has a destructive viral effect whose antidote is inattention.
Echo chamber is a loaded term. Nobody is upset about the Not Murdering People Randomly echo chamber we've created for ourselves in civilised society, and with good reason. Many ideologies are internally stable and don't virally cause the breakdown of society. The concerning echo chambers are the ones that intensify and self-reinforce when left alone.
I've mentioned this a few times in the past, but I'm convinced that filters that exclude work much better than filters that include.
Instead of algorithms pushing us content it thinks we like (or what the advertisers are paying them to push on us), the relationship should be reversed and the algorithms should push us all content except the content we don't like.
Killfiles on Usenet newsreaders worked this way and they were amazing. I could filter out abusive trolls and topics I wasn't interested in, but I would otherwise get an unfiltered feed.
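The mechanics are deliberately trivial - a killfile is subtraction, not ranking. A sketch, with made-up field names:

    # Killfile-style exclude filter: you see everything you subscribed
    # to, minus what you've explicitly blocked. No ranking, no pushing.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        topic: str
        text: str

    BLOCKED_AUTHORS = {"abusive_troll_42"}  # my killfile
    BLOCKED_TOPICS = {"brexit"}

    def feed(subscribed_posts):
        # Chronological feed minus killfile entries.
        return [p for p in subscribed_posts
                if p.author not in BLOCKED_AUTHORS
                and p.topic.lower() not in BLOCKED_TOPICS]

Contrast that with an include-ranking algorithm, where someone else's objective function decides what you see first.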
> I personally don't feel like an ultra-filtered social media which only shows me things I agree with is a good thing. Exposing yourself to things you don't agree with is what helps us all question our own beliefs and prejudices, and grow as people.
You are the one who gets to control what is filtered or not, so that's up to you. It's about choice. By the way, a social media experience which is not "ultra-filtered" doesn't exist. Twitter is filtered heavily, with a bias towards extreme right-wing viewpoints, the ones its owner is in agreement with. And that sort of filtering disguised as lack of bias is a mind virus. For example, I deleted my account a month or so ago after discovering that the CEO of a popular cloud database company that I admired was following an account who posted almost exclusively things along the lines of "blacks are all subhuman and should be killed." How did a seemingly normal person fall into that? One "unfiltered" tweet at a time, I suppose.
> To me, only seeing things you know you are already interested in is no better than another company curating it for me.
I curate my own feeds. They don't have things I only agree with in them, they have topics I actually want to see in them. I don't want to see political ragebait, left or right flavoured. I don't want to see midwit discourse about vibecoding. I have that option on Bluesky, and that's the only platform aside from my RSS reader where I have that option.
Of course, you also have the option to stare endlessly at a raw feed containing everything. Hypothetically, you could exactly replicate a feed that aggregates the kind of RW viewpoints popular on Twitter and look at it 24/7. But that would be your choice.
> For example, I deleted my account a month or so ago after discovering that the CEO of a popular cloud database company that I admired was following an account who posted almost exclusively things along the lines of "blacks are all subhuman and should be killed."
It seems like you're better off knowing that. Without Twitter, you wouldn't, right?
A venue that allows people to tell you who they really are isn't an unalloyed Bad Thing.
> Exposing yourself to things you don't agree with is what helps us all question our own beliefs and prejudices, and grow as people.
I have another wise-sounding soundbite for you: "I disapprove of what you say, but I will defend to the death your right to say it." —Voltaire. All this sounds dandy and fine, until you actually try and examine the beliefs and prejudices at hand. It would seem that such examination is possible, and it is—in theory, whereas in practice, i.e. in application of language—"ideas" simply don't matter as much. Material circumstance, mindset, background, all these things that make us who we are, are largely immutable in our own frames of reference. You can get exposed to new words all the time, but if they come in language you don't understand, it's of no use. This is not a bug, but a feature, a learned mechanism that allows us to navigate massive search spaces without getting overwhelmed.
So far my experience is that unless you subscribe to the general narrative of the platform, the discover algorithm punishes you with directing the mob your way.
I had two of my Bluesky posts on AI attacked by all kinds of random people, which in turn has also led to some of those folks sending me emails and dragging some of my Lobsters and Hacker News comments into online discourse. A not particularly enjoyable experience.
I’m sure one can have that same experience elsewhere, but really it’s Bluesky where I experienced this on a new level personally.
I saw that, and I'm sorry it happened. I thought both the response to your original post and the resulting backlash to both you and everyone who engaged with you sincerely were absurd. I think that because of atproto you have the flexibility to create a social media experience where that sort of thing cannot happen, but I also understand why you in particular would be put off from the whole thing.
I don’t think this is a technical problem but a social problem. I think the audience defines itself by being the antithesis to Twitter instead of being a well balanced one.
I was pretty optimistic in the beginning, but Bluesky doesn't have organic growth, and those who hang out there are the core audience that wants to be there because of what the platform represents. But that also means rejection of a lot of things, such as AI.
In many ways I agree with you. In particular the conglomeration of high percentages of atproto users onto Bluesky owned and moderated algorithms and feeds and the replication of Twitter-style dogpiling combined with the relative lack of ideological diversity on Bluesky has created the perfect environment for toxicity, even if it doesn't reach the depths that Twitter does.
But conversely, that's the only place I disagree with you. Everything that is bad about Bluesky is much worse on Twitter. It's a -- larger -- red mob instead of a blue one (or vice versa I guess depending on how one assigns colors to political alignment), and some of the mob members are actually getting paid to throw bricks!
I tried Bluesky and wanted to like it. My account got flagged as spam, still no idea why. Ironically, it could be another way of losing one's voice to an LLM :)
> My account got flagged as spam, still no idea why.
This happened to me too, 3 weeks ago. The email said why I got flagged as spam, I replied to the email explaining I actually was a human, and after some minutes they unflagged my account. Did you not receive an email saying why?
Well that's the thing -- you might be flagged as spam in the Bluesky PDS, but there are other PDS's, with their own feeds and algorithms, and in fact you can make your own if you so choose. That's a lot of work, and Twitter is definitely easier, but atproto means that an LLM cannot steal your voice.
If you follow certain people, various communities will, en masse, block you and report you automatically with software "block lists". This can lead to getting flagged as spam.
I enjoy Mastodon a lot. Ad-free, algo-free. I choose what goes in my feed, I do get exposed to external viewpoints through people's boosts (aka re-tweets), and I follow hashtags (to get content from people I do not know). But it's extremely peaceful; spam and bots are rare and get flagged quickly. There's a good ecosystem of mobile apps. I can follow a few Bluesky people through a bridge between platforms, and they can follow me too.
By who, exactly? It’s easy to call for regulation when you assume the regulator will conveniently share your worldview. Try the opposite: imagine the person in charge is someone whose opinions make your skin crawl. If you still think regulation beats the status quo, then the call for regulation is warranted, but be ready to face the consequences.
But if picturing that guy running the show feels like a disaster, then let’s be honest: the issue isn’t the absence of regulation, it’s the desire to force the world into your preferred shape. Calling it “regulation” is just a polite veneer over wanting control.
I’m surprised at how much regulation has become viewed as a silver bullet in HN comments.
Like you said, the implicit assumption in every call for regulation is that the regulation will hurt companies they dislike but leave the sites they enjoy untouched.
Whenever I ask what regulations would help, the only responses are extremes like "banning algorithms" or something. Most commenters haven't stopped to realize that Hacker News is an algorithmic social media site (are we not here socializing, with the order of posts and comments determined by a black-box algorithm?).
Most people on HN who advocate regulating social media don't only want to prevent those platforms from showing targeted inflammatory content, they want to make all algorithmic feeds other than strictly chronological illegal, as well as moderation of any legal content.
From that point of view, Hacker News is little different than Facebook. One could even argue that HN's karma system is a dark pattern designed to breed addiction and influence conversation in much the same way as other social media platforms, albeit not to the same degree.
I would be astonished if a majority of people opposed to social media algorithms consider HN's approach to be sufficiently objectionable to be regulated or in any way similar to Facebook.
Hacker News doesn't use a strictly chronological feed. Hacker News manipulates the feed to promote certain items over others. Hacker News moderates legal content. Those are all features of social media algorithms that people are opposed to. It just isn't "objectionable" when HN does it.
And regulations of this kind always creep out of scope. We've seen it happen countless times. But people hate social media so much around here that they simply don't think it through, or else don't care.
> Most people on HN who advocate regulating social media...want to make all algorithmic feeds other than strictly chronological illegal
I don't buy that, at all. I think they want a chronological feed to follow, and they want the end of targeted outrage machines that are poisoning civil discourse and breeding the type of destructive politics that has led our sitting U.S. president to call for critics to be hanged.
Comparing what Facebook has done to the U.S. with HN's algorithm is slippery slope fallacy to an extreme, and even if HN's front page algorithm against all odds was outlawed due to a political overreaction to the destruction Facebook has wrought, I'd call it a fair trade.
>Comparing what Facebook has done to the U.S. with HN's algorithm is slippery slope fallacy to an extreme, and even if HN's front page algorithm against all odds was outlawed due to a political overreaction to the destruction Facebook has wrought, I'd call it a fair trade.
You're trying to discredit my comment but it seems as if your anger just led you around to proving me right.
At least HN karma is incremental and based on something approximating merit, as opposed to being a slot machine where you never know which comment will earn karma. More effort or rare insight generally yields more karma.
That hasn't been my experience. How much karma you get is heavily dependent on how many people see the comment. The most insightful, effort-filled comment at the bottom of a 4-day-old thread isn't going to get you nearly as much, if anything, compared to a joke with just the right amount of snark at the top of a post currently at the top of the front page.
> But if picturing that guy running the show feels like a disaster, then let’s be honest: the issue isn’t the absence of regulation, it’s the desire to force the world into your preferred shape.
For example, we can forbid corporations from using algorithms beyond sorting by date of the post. Regulation could forbid gathering data about users: no gender, no age, none of the rest.
> Calling it “regulation” is just a polite veneer over wanting control.
It is you that may have misinterpreted what regulations are.
> For example, we can forbid corporations from using algorithms beyond sorting by date of the post
Hacker News sorted by "new" is far less valuable to me than the default homepage which has a sorting algorithm that has a good balance between freshness and impact. Please don't break it.
> It is you that may have misinterpreted what regulations are.
The definition of regulation is literally: "a rule or directive made and maintained by an authority." I am just scared about who the authority is going to be.
Control is the whole point. One person being in charge, enacting their little whims, is what you get in an uncontrolled situation and what we have now. The assumption is that you live in a democratic society and "the regulator" is effectively the populace. (We have to keep believing democracy is possible or we're cooked.)
By a not-for-profit community organization that has zero connection to or interest in any for-profit enterprise, that represents the stable wellbeing of society, and that has a specific mandate to do so.
Just like the community organizations we had that watched over government agencies that we allowed to be destroyed because of profit. It's not rocket science.
> By a not-for-profit community organization that has zero connection to or interest in any for-profit enterprise, that represents the stable wellbeing of society, and that has a specific mandate to do so.
Then you get situations like the school board stacked with creationists who believe removing the science textbooks is important for the stable wellbeing of society.
Or organizations like MADD that are hell bent on stamping out alcohol one incremental step at a time because “stable wellbeing of society” is their mandate.
Or the conservative action groups in my area that protest everything they find indecent, including plays and movies, because they believe they’re pushing for the stable wellbeing of society.
There is no such thing as a neutral group pushing for a platonic ideal stable wellbeing of society. If you give a group of people power to control what others see, it will be immediately co-opted by special interests and politics.
Singling out non-profit as being virtuous and good is utopian fallacy. If you give any group power over what others are allowed to show, it will be extremely political and abused by every group with an agenda to push.
- Ban algorithmic optimization that feeds on and proliferates polarisation.
- To heal society: Implement discussion (commenting) features that allow (atomic) structured discussions to build bridges across cohorts and help find consensus (vs 1000s of comments screaming the same nonsense).
- Force the SM Companies to make their analytics truly transparent and open to the public and researchers for verification.
All of this could be done tomorrow, no new tech required. But it would lose the SM platforms billions of dollars.
Why? Because billions of people posting emotionally, commenting with rage, yelling at each other, and repeating the same superficial arguments/comments/content over and over without ever finding common ground traps far more users in the SM companies' engagement loop than people having civilised discussions, finding common ground, and moving on with a topic ever would.
One system of social media would unlock a great consensus-based society for the many; the other delivers endless dystopic screaming battles and riches for a few, while spiralling the world further into a global theatre of cultural and actual (civil) war, thanks to the Zuckerbergs & Thiels.
That only treats the symptoms, not the cause. The purpose of algorithmic optimization farming engagement is to increase ad impressions for money. It is advertising that has to be regulated in such a way that maximizing ad impressions is not profitable or you will find that social media companies will still have every incentive to find other ways to do it that will probably be just as harmful.
"Could be done tomorrow" - and then you list at least four priorities, which would require one multi-page bill, or more likely several bills, making their way through the House, the Senate, and the President's desk, all while under fire from every lobbyist in Washington?
Recasting regulation as a desire for control is too reductive. The other point of regulation is compromise. No compromise at all is just a wasted opportunity.
My view is that they are just exposing issues with the people in said societies, and now it is harder to ignore them. Much of the hate and the fear and the envy that I see on social networks has other causes, but people are having difficulty addressing those.
With or without social networks this anger will go somewhere; I don't think regulation alone can fix that. Let's hope it will be something transformative, not in the world-ending direction but in the constructive direction.
They seem to artificially create filter bubbles, echo chambers and rage. They do that just for the money. They divide societies.
For example:
(Trap of Social Media Algorithms: A Systematic Review of Research on Filter Bubbles, Echo Chambers, and Their Impact on Youth)
> First, there is a consistent observation across computational audits and simulation studies that platform curation systems amplify ideologically homogeneous content, reinforcing confirmation bias and limiting incidental exposure to diverse viewpoints [1,4,37]. These structural dynamics provide the “default” informational environment in which youth engagement unfolds. Simulation models highlight how small initial biases are magnified by recommender systems, producing polarization cascades at the network level [2,10,38]. Evidence from YouTube demonstrates how personalization drifts toward sensationalist and radical material [14,41,49]. Such findings underscore that algorithmic bias is not a marginal technical quirk but a structural driver shaping everyday media diets. For youth, this environment is especially influential: platforms such as TikTok, Instagram, and YouTube are central not only for entertainment but also for identity work and civic socialization [17]. The narrowing of exposure may thus have longer-term consequences for political learning and civic participation.
Throughout history, people did lots of horrible things and/or felt miserable without social networks. Yes, amplifying or rewarding does not have a positive effect, but I would like to see further analysis of the magnitude.
Think of slavery, or the burning of witches, or genocides - those were considered perfectly normal not that long ago (on a historical scale). I feel that focusing on social networks prevents some people from asking "is that the root cause?". I personally think there are other reasons for this generic "anger" that have a larger impact and that have different solutions than "less AI/less social networks", but that would be too off-topic.
Is hate, fear, or envy by themselves wrong, or only wrong when misdirected?
What if social media and the internet at large are now exposing people to things which had before been kept hidden from them, or distorted? Are people wrong to feel hate?
I know the time before the internet, when a very select few decided what the public should know and not know, what they should feel, what they should do, and how they should behave. The internet is not the first mass communications medium, and neither are social media or LLMs. The public has been manipulated and mind-primed by mass media for over a century now.
The largest bloodshed events, World Wars I and II, were orchestrated by lunatics screaming on the radio or behind a pulpit, with the public eagerly being herded by them to the bloodshed.
This comment isn't in opposition to yours, it's just riffing on what you said.
> Is hate, fear, or envy by themselves wrong, or only wrong when misdirected?
I think they are natural feelings that appear for various reasons. People have struggled for centuries to control their impulses, and this was used for millennia to the advantage of whoever could manipulate them.
The Second World War did not appear in a "happy world". It might even have started due to the Great Depression. For other conflicts, similarly - I don't think the situation was great before them for most people.
I am afraid that social networks just better expose what happens in people's heads (which would be worrying, as it could predict larger-scale conflicts) rather than making normal people angry (which would be solved by just reducing social media). Things are never black and white, so it's probably something in between. Time will tell whether it's closer to the first or the second.
I agree, but focusing on "the algorithm" makes it seem to the outsider like it must be a complicated thing. Really it just comes down to whether we tolerate platforms that let somebody pay to have a louder voice than anyone else (i.e. ad-supported ones). Without that, the incentive to abuse people's attention goes away.
Do LinkedIn as well. I got rid of it earlier this year. The "I am so humbled/blessed to be promoted/reassigned/fired.." posts reached a level of parody that I just couldn't stomach any longer. I felt more free immediately.
You can have a LinkedIn profile without reading the feed.
This is literally how most of the world uses LinkedIn
I never understand why people feel compelled to delete their entire account to avoid reading the feed. Why were you even visiting the site to see the feed if you didn’t want to see the feed?
LinkedIn bothers me the least, even though it definitely has some of the highest level of cringe content. It's still a good tool to interact with recruiters, look at companies and reach out to their employees. The trick is blocking the feed with a browser extension.
Better suggestion: Ignore the feed if you don’t like it.
Don’t visit the site unless you have a reason to, like searching for jobs, recruiting, or looking someone up.
I will never understand these posts that imply that you’re compelled to read the LinkedIn feed unless you delete your account. What’s compelling you people to visit the site and read the feed if you hate it so much? I don’t understand.
I don't understand how people can be so dismissive of LinkedIn purely for its resume function.
For essentially every "knowledge worker" profession with a halfway decent CV, a well kept LinkedIn resume can easily make a difference of $X0,000 in yearly salary, and the initial setup takes one to a few hours. It's one of the best ROI actions many could do for their careers.
Many engineers are dismissive of doing that, and their justifications are often full of privilege.
I think this statement is highly dependent on market and geography. I, for one, have mostly received scams. For the occasional real contact, we shifted away from LinkedIn as soon as we could after the basic hello.
As someone who doesn't have, and never has had, a LinkedIn: what would a "competitor" look like? There are plenty of job boards. What are you using LinkedIn for?
Do you really want a “competitor” to linkedin? Do you really want to have to make and manage accounts on multiple sites because you need a job and you don’t know which a company uses?
Isn’t it better to have a single place you check when you need a job because everyone else is also there?
> I deleted my Facebook account a couple of years ago and my Twitter one yesterday.
I never signed up for Facebook or Twitter. My joke is I am waiting until they become good. They are still shitty and toxic from what I can tell from the outside, so I'll wait a little longer ;-)
A social network can be great. Social media — usually not.
Something like Instagram where you have to meet with the other party in person to follow each other and a hard limit on the number of people you follow or follow you (say, 150 each) could be an interesting thing. It would be hard to monetize, but I could see it being a positive force.
Twitter was an incredible place from 2010 to 2017. You could randomly message something and they would more often than not respond. Eventually an opportunity would come and you’d meet in person. Or maybe you’d form an online community and work towards a common goal. Twitter was the best place on the internet during that time.
Facebook as well had a golden age. It was the place to organize events, parties, and meetups, before Instagram and DMs took over. Nothing beats seeing someone post an album from last night's party and messaging your friends asking them if they remember anything that happened.
I know being cynical is trendy, but you genuinely missed out. Social dynamics have changed. Social media will never be as positive on an individual level as it was back then.
I eliminated Twitter when a certain rich guy took over. Actually, I deleted my account there before that, as Twitter sent me spam mail trying to lecture me about what I write. There was nothing wrong with what I wrote - Twitter was wrong. I cannot accept AI-generated spam by Twitter, so I went away. Don't really miss it either, but Elon really worsened the platform significantly with his antics.
> Just because I said to someone 'Brexit was dumb', I don't expect to get fed 1000 accounts talking about it 24/7. It's tedious and unproductive.
Yeah, I can relate to this, but mostly what annoyed me was that Twitter interfered with "we got a complaint about you - they are right, you are a troublemaker". I don't understand why Twitter wants to interfere in communication. Reddit is even worse, since moderators have such a wild range of what is "acceptable" and what is not. Double standards everywhere on Reddit.
No, there needs to be control over the algorithms that get used. You ought to be able to tune them. There needs to be a Google-fu equivalent for social media. Or, instead of one platform, one algorithm, let users define the algorithm to a certain degree, using LLMs to help with that, and then allow others to access your algorithms too. Asking someone like Facebook to tweak the algorithm is not going to help, imo.
IMO there should not be an algorithm. You should just get what you have subscribed to, with whatever filters you have defined. There are better and worse algorithms but I think the meat of the rot is the expectation of an algorithm determining 90% of what you see.
One could absolutely push algorithms that personalize towards what the user wants to see. I think LLMs could be amazing at this. But that's not the maximally profitable algorithm, so nobody does it.
As so many have said, enragement equals engagement equals profit.
All my social media accounts are gone as well. They did nothing for me and no longer serve any purpose.
TBF Bluesky does offer a chronological feed, but the well-intentioned blocklists just became the chief tool for the mean girls of the site.
Could someone use a third-party AI agent to re-curate their feeds? If it was running from the user's computer I think this would avoid any API legal issues, as otherwise ad and script blockers would have been declared illegal long ago.
> but the well-intentioned blocklists just became the chief tool for the mean girls of the site.
I've never used it, but yes this is what I expected. It would be better to have topical lists that users could manually choose to follow or block. This would avoid quite a bit of the "mean girl" selectivity. Though I suppose you'd get some weird search-engine-optimization like behavior from some of the list curators (even worse if anyone could add to the list).
Yes, you absolutely can do this and back in the before times Facebook used to have an API that let you design your own interface to it.
But now I think that will be treated with as much derision by FAANG as ad blockers because you're preventing them from enraging you to keep you engaged and afraid. Why won't you think of the shareholder value (tm)?
But mandating API access would be fantastic government regulation going forward. Don't hold your breath.
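In the meantime, nothing stops you from prototyping the re-curation agent locally against whatever data you can already get at. A rough sketch of what that loop could look like; `fetchTimeline` is a hypothetical stand-in for however you obtain posts (RSS, a data export, a scraped page), and it assumes an OpenAI-compatible endpoint:

    // recurate.ts - sketch of a local feed re-curation agent.
    type Post = { id: string; author: string; text: string };

    // Hypothetical: stands in for RSS, a data export, or a scraped page.
    declare function fetchTimeline(): Promise<Post[]>;

    const PREFERENCES =
      "Keep technical content and personal writing; drop ragebait, engagement bait, and politics.";

    async function scorePost(post: Post): Promise<number> {
      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: "gpt-4o-mini", // any cheap model; this is a simple judgment call
          messages: [
            { role: "system", content: `Rate 0-10 how well the post matches: ${PREFERENCES} Reply with the number only.` },
            { role: "user", content: post.text },
          ],
        }),
      });
      const data = await res.json();
      return Number(data.choices[0].message.content) || 0;
    }

    // Keep only posts the model scores above an (arbitrary) threshold.
    async function recurate(): Promise<Post[]> {
      const posts = await fetchTimeline();
      const scored = await Promise.all(
        posts.map(async (p) => [p, await scorePost(p)] as const),
      );
      return scored.filter(([, score]) => score >= 6).map(([post]) => post);
    }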
> Just because I said to someone 'Brexit was dumb', I don't expect to get fed 1000 accounts talking about it 24/7. It's tedious and unproductive.
I’m not the biggest Twitter user but I didn’t find it that difficult to get what I wanted out of it.
You already discovered the secret: You get more of what you engage with. If you don't want to hear a lot of Brexit talk, don't engage with Brexit content. Unfollow people who are talking a lot about Brexit.
If you want to see more of something, engage with it. Click like. Follow those people. Leave a friendly comment.
On the other hand, some people are better off deleting social media if they can’t control their impulses to engage with bait. If you find yourself getting angry at the Brexit content showing up and feeling compelled to add your two cents with a comment or like, then I suppose deleting your account is the only viable option.
> If you don’t want to hear a lot of Brexit talk, don’t engage with Brexit content.
That is really limiting though. I do not want to see Brexit ragebait in my threads, but I am quite happy to engage in intelligent argument about it. The problem is that if, for example, a friend posts something about Brexit I want to comment on, my feed then fills with ragebait.
My solution is to bookmark the friends and groups pages, and the one group I admin and go straight to those. I have never used the app.
I got out of Twitter for a few reasons; part of what made it unpleasant was that it didn't seem to be just what I did that adjusted my feed, but that it was also affected by what the other people I connected to did.
> You get more of what you engage with. If you don’t want to hear a lot of Brexit talk, don’t engage with Brexit content.
The algorithm doesn’t show you “more of the things you engage with”, and acting like it does makes people think what they’re seeing is a reflection of who they are, which is incorrect.
The designers of these algorithms are trying to figure out which “mainstream category” you are. And if you aren’t in one, it’s harder to advertise to you, so they want to sand down your rough edges until you fit into one.
You can spend years posting prolifically about open source software, Blender and VFX on Instagram, and the algorithm will toss you a couple of things, but it won't really know what to do with you (aside from maybe selling you some stock video packages).
But you make one three word comment about Brexit and the algorithm goes “GOTCHA! YOU’RE ANTI-BREXIT! WE KNOW WHAT TO DO WITH THAT!” And now you’re opted into 3 big ad categories and getting force-fed ragebait to keep you engaged, since you’re clearly a huge political junkie. Now your feed is trash forever, unless you engage with content from another mainstream category (like Marvel movies or one of the recent TikTok memes).
> The algorithm doesn’t show you “more of the things you engage with”,
That’s literally what the complaint was that I was responding to.
You even immediately contradict yourself and agree that the algorithm shows you what you engage with.
> But you make one three word comment about Brexit and the algorithm goes “GOTCHA!”
> Now your feed is trash forever, unless you engage with content from another mainstream category
This is exactly what I already said: If you want to see some content, engage with it. If you don’t want to see that content, don’t engage with it.
Personally, I regret engaging with this thread. Between the ALL CAPS YELLING and the self-contradictory posts this is exactly the kind of rage content and ragebait that I make a point to unfollow on social media platforms.
The issue is that it's not symmetric: the algorithm is biased towards rage-baity content, so it will use any tiny level of engagement with something related to that content to push it, but there's not really anything you can do to stop it, or to get it to push less rage-baity content. This is also really bad if you realise you have a problem with getting caught up in such content (for some it's borderline addictive): there are no tools for someone to say 'I realise I respond to every message I see on this topic, but really that's not good for me, please don't show me it in the first place'.
OK sure, if you want to be technically correct, “the algorithm shows you what you engage with” in some sense, but not any useful sense. There’s no proportionality.
As I said above, if you engage heavily with content you like that is outside of the mainstream categories the algorithm has been trained to prefer, it will not show you more of those things.
If you engage one single time, in even the slightest way, with one of those mainstream categories, you will be seeing nothing but that, nonstop, forever.
The “mainstream categories” are not publicly listed anywhere, so it’s not always easy to know that you’ve just stepped in one until it’s too late.
You can’t engage with things you like in proportion to how much you care about them. If something is in a mainstream category and you care about it only a little bit, you have to abstain from interacting with it at all, ever, and not slip up. Having to maintain constant vigilance about this all the time sucks; that’s what pisses me off.
I use X. I have an enormous blocklist and I block keywords. I found that I can also block emoji. This keeps my feed focused on what I want to see (no politics; just technology, classical and jazz music, etc.)
That's the same algorithm YouTube has, just more blatant. Phone mics and your coworkers' proximity do a great job of picking up things you've said, even after disabling mic access plus airplane mode, just by process of elimination.
I'll only use an LLM for projects and building tools, like a junior dev in their 20s.
An interesting thing about Twitter, I find, is that plenty of rage bait and narcissism bait surfaces, but amid it is highly technical information that also gets published there and is extremely useful to me (immunology, genomics, and of course computational work).
I've learned pretty well how to 'guide' the algorithm so the tech stuff that's super valuable (to me) does not vanish, but I still get nonsense bozo posts in the mix.
I generally agree with the sentiment, but I can't help but feel like we're attributing too much of this change to LLMs. While they're certainly driving this change even further, it's a trend that started well before LLMs became as widespread as they are today.
What personally disturbs me the most is the self censorship that was initially brought forward by TikTok and quickly spread to other platforms - all in the name of being as advertiser friendly as possible.
LinkedIn was the first platform where I really observed people losing their unique voice in favor of corporate-friendly, please-hire-me speak. Now this seems to be basically every platform. The only platform that seems to be somewhat protected from it is Reddit, where many mods seem to dislike LLMs as much as everybody else. But more likely, it's just less noticeable.
> the self censorship that was initially brought forward by TikTok
I think that’s even too soon! YouTube has had rules around being advertiser-friendly for longer than TikTok has existed. And the FCC has been fining swearing on public broadcasts for 50+ years.
But I do agree, we’re attributing too much to LLMs. We don’t see personal, human-oriented content online because social media is just not about community.
1. Young people (correctly) realized they could make lots of money being influencers on social media. TikTok does make that easier than ever. I have close friends who make low 6 figures streaming on TikTok (so obviously they quit the low wage jobs they were doing before).
2. People have been slowly waking up to the fact that social media has always been pretty fake. I quit 6 years ago, and most of my friends have slowly reduced how much they use it. All of the platforms are legally incentivized to only care about profit and engagement. Capitalism doesn’t allow a company to care about community and personal voice, if algorithmic feeds of influencers will make them more money.
There’s still good content out there if you know where to look. But digital human connection happens in group chats, DMs, and FaceTime, not on public social media.
Maybe. Nature abhors a vacuum. I personally suspect that something new will emerge. For better or worse, some humans work best when weird restrictions are imposed. That said, yes, the wild 90s net is dead. It probably has been for a while, but we're all mourning.
Not quite dead yet. For me the rise of LLMs and BigTech has helped me turn more away from it. The more I find Ads or AI injected into my life, the more accounts I close, or sites I ignore. I've now removed most of my BigTech 'fixes', and find myself with time to explore the fun side of hacking again.
I dug out my old PinePhone and decided to write a toy OS for it. The project has just the right level of challenge and reward for me, and feels more like early days hacking/programming where we relied more on documentation and experimentation than regurgitated LLM slop.
Nothing beats that special feeling when a hack suddenly works. Today it was just a proximity sensor reading displayed, but it involved a lot of SoC hacking to get that far.
I know there are others hacking hard in obscure corners of tech, and I love this site for promoting them.
There are still small pockets with actual humans to be found. The small web exists. Some forums keep on going; I'm still shitposting on Something Awful after twenty years and it's still quite active. Bluesky has its faults, but it also has, for example, an active community of scholars you can follow and interact with.
100%. I miss trackers and napster. I miss newgrounds. This mobile AI bullshit is not the same. I don't know why, but I hate AI. I consider myself just as good as the best at using it. I can make it do my programming. It does a great job. It's just not enjoyable anymore.
I've been thinking about this as well, especially in the context of historical precedents in terms of civilization/globalization/industrialization.
How LLMs standardize communication is the same way there was a standardization in empires expanding (cultural), book printing (language), the industrial revolution (power loom, factories, assembly procedures, etc).
In that process, culture, dialects, languages, craftsmanship, and ideas that were interesting but not as "scalable" (or simply not used by the people in power) were often lost - replaced by easier-to-produce but often lesser-quality products, through the power of "affordable economics", not active conflict.
We already have the English 'business-concise, buzzword-heavy' formal register trained into ChatGPT (or, for informal messaging, the casual overexcited American), which I'm afraid might take hold of global communication the same way as advanced LLM usage spreads.
>How LLMs standardize communication is the same way there was a standardization in empires expanding (cultural), book printing (language), the industrial revolution (power loom, factories, assembly procedures, etc).
Explain to me how "book printing" of the past "standardized communication" in the same way as LLMs are criticized for homogenizing language.
I'm taking "same way" to be read as "authoritative", whether de facto or de jure. Basically by dint of people using what's provided instead of coming up with their own.
Everyone has the same few dictionary spellings (that are now programmed into our computers). Even worse (from a heterogeneity perspective), everyone also has the same few grammar books.
As examples: How often do you see American English users write "colour", or British English users write "color", much less colur or collor or somesuch?
Shakespeare famously spelled his own last name half a dozen or so different ways. My own patriline had an unusual variant spelling of the last name, that standardized to one of the more common variants in the 1800s.
"Bullokar's grammar was faithfully modelled on William Lily's Latin grammar, Rudimenta Grammatices (1534).[9] Lily's grammar was being used in schools in England at the time, having been "prescribed" for them in 1542 by Henry VIII.[5]"
It goes on to mention a variety of grammars that may have started out somewhat descriptive, but became more prescriptive over time.
Hits close to home after I've caught myself tweaking AI drafts just to make them "sound like me". That uniformity in feeds is real and it's like scrolling through a corporate newsletter disguised as personal takes.
What if we flip LLMs into voice trainers? Like, use them to brainstorm raw ideas and rewrite everything by hand to sharpen that personal blade. Is the atrophy risk still huge?
It's still an editor I can turn to in a pinch when my favorite humans aren't around. It makes better analogies sometimes. I like going back and forth with it, and if it doesn't sound like me, I rewrite it.
Don't look at social media. Blogging is kinda re-surging. I just found out Dave Barry has a substack. https://davebarry.substack.com/ That made me happy :) (Side note, did he play "Squirrel with a Gun??!!!")
The death of voice is greatly exaggerated. Most LLM voice is cringe. But it's ok to use an LLM, have taste, and get a better version of your voice out. It's totally doable.
[Sometime in the near future]
The world's starved for authenticity. The last original tweet crowned a God... then killed the kid chasing that same high. Trillionaires run continent-wide data centers, endlessly spinning up agents that hire cheap physical labor to scavenge the world for any spark of novelty. The major faith is an LLM cult forecasting the turning of the last stone. The rest of us choke on recycled ad slop.
Tbh I prefer to read/skim the comments first and only occasionally read the original articles if the comments make me curious enough. So far I've never ended up checking something that seemed AI-generated.
It’s pretty much all you see nowadays on LinkedIn. Instagram is infected by AI videos that Sora generates while X has extremist views pushed up on a pedestal.
The HN moderation system seems to hold, at least mostly. But I have seen high-ranking HN submissions with all the subtler signs of LLM authorship that have managed to get lots of engagement. Granted, it's mostly people pointing out the subtle technical flaws or criticizing the meandering writing style, but that works to get the clicks and attention.
Frankly, it only takes someone a few times to "fall" for an LLM article -- that is, to spend time engaging with an author in good faith and try to help improve their understanding, only to then find out that they shat out a piece of engagement bait for a technology they can barely spell -- to sour the whole experience of using a site. If it's bad on HN, I can only imagine how much worse things must be on Facebook. LLMs might just simply kill social media of any kind.
How do you know? A lot of the stuff I see online could very much be produced by LLMs without me ever knowing. And given the economics I suspect that some of it already is.
Ironically this post is written in a pretty bland, 'blogging 101' style that isn't enjoyable to read and serves just to preach a simple, consensus idea to the choir.
These kinds of posts regularly hit the top 10 on HN, and every time I see one I wonder: "Ok, will this one be just another staid reiteration of an obvious point?"
True, but one of the least-explored problems with AI is that because it can regurgitate basic writing, basic art, basic music with ease, there is this question:
Why do it at all if I won't do better than the AI?
The worst risk with AI is not that it replaces working artists, but that it dulls human creativity by killing the urge to start.
I am not sure who said it first, but every photographer has ten thousand bad photos in them and it's easier if they take them at the beginning. For photographers, the "bad" is not the technical inadequacy of those photos; you can get past that in the first one hundred. The "bad" is the generic, uninteresting, uninspiring, underexplored, duplicative nature of them. But you have to work through that to understand what "good" is. You can't easily skip these ten thousand photos, even if your analysis and critique skills are strong.
There's a lot to be lost if people either don't even start or get discouraged.
But for writing, most of the early stuff is going to read much like this sort of blog post (simply because most bloggers are stuck in the blogging equivalent of the ten thousand photos; the most popular bloggers are not those elevating writing).
"But it looks like AI" is the worst, most reflexive thing about this, because it always will, since AI is constantly stealing new things. You cannot get ahead of the tireless thief.
The damage generative AI will do to our humanity has only just started. People who carry on building these tools knowing what they are doing to our culture are beneath our contempt. Rampantly overcompensated, though, so they'll be fine.
I continually resist the urge to deploy my various personas onto hn, because I want to maintain my original hn persona. I am not convinced other people do the same. It is not that difficult to write in a way that avoids some tell tale signs.
He doesn't link many examples, but at the end he gives the example of an author pumping out 8+ articles in a week across a variety of topics.
https://medium.com/@ArkProtocol1
I don't spend time on medium so I don't personally know.
There are already many AI-generated submissions on HN every day. Comments maybe less so, but I've already seen some, and the amount is only going to increase with time.
Every time I see AI videos in my YouTube recommendations I say "don't recommend this channel", but the algorithm doesn't seem to get the hint. Why don't they make a preference option of "don't show me AI content"?
I've seen AI generated comments on HN recently, though not many. Users who post them usually only revert back to human when challenged (to reply angrily), which hilariously makes the change in style very obvious.
Of course, there might be hundreds of AI comments that pass my scrutiny because they are convincing enough.
Not sure if it's an endemic problem, just yet, but I expect it to be, soon.
For myself, I have been writing, all my life. I tend to write longform posts, from time to time[0], and enjoy it.
That said, I have found LLMs (ChatGPT works best for me) to be excellent editors. They can help correct minor mistakes, as long as I ignore a lot of their advice.
I just want to chime in and say I enjoy reading your takes across HN, it's also inspiring how informative and insightful they are. Glazing over, please never stop writing.
Humans evolved to spend fewer calories and avoid cognitively demanding tasks.
People will spend time on things that serve utility AND are calorically cheap. Doomscrolling is a more popular pastime than, say, completing Coursera courses.
The marketplace is a terrible mechanism for truth-finding except for all the others. What's your proposed alternative that doesn't just relocate the problem to whoever gets to be the arbiter?
Let's clarify: maybe the best ideas would win out in a "level marketplace", where the consumer actually is well informed on the products, the products' true costs have to be priced in, and there are no ad agencies.
Instead, we have misinformation (PR), lobbying, bad regulation made by big companies to entrench their products, and corruption.
So, maybe, like communism, in a perfect environment the market would produce what's best for the consumers/population, but as always, there are minority power-seeking subgroups that will have no moral barriers to manipulating the environment to push their product/company.
They get drowned out by bots and misinformation and rage bait and 'easiness'.
Economy is shit? Let's throw out the immigrants because they are the problem, and let's use the most basic idea of taxing everything to death.
No one wants to hear hard truths, and no one wants to accept that even as adults, they might just not be smart. Just because you became an adult, your education should still matter (and I do not mean having one degree = expert).
First we need to think about why we consume content. I am happy to read LLM-created stuff when I need to know something and it delivers 100%. Other reasons, like "get perspectives of real humans" or "resonate"... not so much.
If you give an LLM enough context, it writes in your voice. But it requires using an intelligent model, and very thoughtful context development. Most people don't do this because it requires effort, and one could argue maybe even more effort than just writing the damn thing yourself. It's like trying to teach a human, or anyone, how to talk like you: very hard because it requires at worst your entire life story.
Something that freaked me out a little bit is that I've now written enough online (i.e.: HN comments) that the top models know my voice already and can imitate it on request without having to be fed any additional context.
There's a data centre somewhere in the US running additions and multiplications through a block of numbers that has captured my voice.
Why does this even matter? If it can say what I wanted to say more eloquently and in a less stilted way, adding some interesting nuance along the way, while still sounding close to me - why not? Meanwhile, I can learn a rhetorical trick or two from LLMs by reading the result.
In one of the WhatsApp communities I belong to, I noticed that some people use ChatGPT to express their thoughts (probably asking it to make their messages more eloquent or polite or whatever).
Others respond in the same style. As a result, it ends up with long, multi-paragraph messages full of em dashes.
Basically, they are using AI as a proxy to communicate with each other, trying to sound more intelligent to the rest of the group.
A friend of mine does this as an English-as-a-second-language speaker, because his tone was always being misconstrued. I'd bug him about his slop, but he'll take that over getting his tone misconstrued. I get it.
Sometime within the next few years I imagine there will be a term along the lines of "re-humanise," where folks detox from AI use to get back in touch with humanity. At the rate we're going, humanity has become a luxury and will soon demand a premium.
I don't disagree, but LLMs happened to help with standardizing some interesting concepts that were previously more spread out as concepts ( drift, scaffolding, and so on ). It helps that chatgpt has access to such a wide audience to allow that level of language penetration. I am not saying don't have voice. I am saying: take what works.
> I don't disagree, but LLMs happened to help with standardizing some interesting concepts that were previously more spread out as concepts ( drift, scaffolding, and so on ).
What do you mean? The concepts of "drift" and "scaffolding" were uncommon before LLMs?
Not trying to challenge you. Honestly trying to understand what you mean. I don't think I have heard this ever before. I'd expect concepts like "drift" and "scaffolding" to be already very popular before LLMs existed. And how did you pick those two concepts of aaallll... the concepts in this world?
Apologies, upon re-reading it does seem I did not phrase that as clearly as I originally intended. You are right in the sense that the concepts existed beforehand and the words were there to capture them. What did not exist, however, was a sudden resurgence of those words due to them appearing in LLM output more often than not. This is what I mean by a level of language penetration (people using words and concepts because LLMs largely introduced them to those concepts -- kinda like genetics or pop psych: before situational comedy, projection was not a well-known concept).
Also, these models are being used to promote fake news, create controversy, or interact with real humans for unknown purposes.
I've talked to some friends and they feel the same: depending on where you are participating in a discussion, it just might not feel worth it, because the other side might just be a bot.
In a lot of ways, I'm thankful that LLMs are letting us hear the thoughts of people who usually wouldn't share them.
There are skilled writers. Very skilled, unique writers. And I'm both exceedingly impressed by them as well as keenly aware that they are a rare breed.
But there's so many people with interesting ideas locked in their heads that aren't skilled writers. I have a deep suspicion that many great ideas have gone unshared because the thinker couldn't quite figure out how to express it.
In that way, perhaps we now have a monotexture of writing, but also perhaps more interesting ideas being shared.
Of course, I love a good, unique voice. It's a pleasure to parse patio11's Straussian technocratic musings. Or pg's as-simple-as-possible form.
And I hope we don't lose those. But somehow I suspect we may see more of them as creative thinkers find new ways to express themselves. I hope!
> In a lot of ways, I'm thankful that LLMs are letting us hear the thoughts of people who usually wouldn't share them.
I could agree with you in theory, but do you see the technology used that way? Because I definitely don't. The thought process behind the vast majority of LLM-generated content is "how do I get more clicks with less effort", not "here's a unique, personal perspective of mine, let's use a chatbot to express it more eloquently".
We might get twice as many original ideas but hundred times as much filler. Neither of those aspects erases the other. Both the absolute number of ideas and the ratio matter.
Are they your ideas if they go through a heavy-handed editor? If you've had lots of conversations with others to refine them?
I dunno. There's ways to use LLMs that produces writing that is substantially not-your-ideas. But there's also definitely ways to use it to express things that the model would not have otherwise outputted without your unique input.
I seriously doubt people didn't write blog posts or articles before LLMs because they didn't know how to write.
It's not some magic roadblock. They just didn't want to spend the effort to get better at writing; you get better at writing by writing (like good old Steve says in "On Writing"). It's how we all learnt.
I'm also not sure everyone should be writing articles and blog posts just because. More is not better. Maybe if you feel unmotivated about making the effort, just don't do it?
Almost everyone will cut novice writers and non-native $LANGUAGE speakers some slack. Making mistakes is not a sin.
Finally, my own bias: if you cannot be bothered to write something, I cannot be bothered to read it. This applies to AI slop 100%.
I hate when people hijack progressive language - like in your case the language of accessibility - for cheap marketing and hype.
Writing is one of the most accessible forms of expression. We were living in a world where even publishing was as easy as imaginable - sure, not actually selling/profiting, but here’s a secret, even most bestselling authors have either at least one other job, or intense support from their close social circle.
What you do to write good is you start by writing bad. And you do it for ages. LLMs not only don’t help here, they ruin it. And they don’t help people write because they’re still not writing. It just derails people who might, otherwise, maybe start actually writing.
Framing your expensive toy that ruins everything as an accessibility device is absurd.
I'm anon, but also the farthest thing from a progressive, so I find this post amusing.
I don't disagree with a lot of what you're saying but I also have a different frame.
Even if we take your claim that LLMs don't make people better writers as true (which I think there's plenty to argue with), that's not the point at all.
What I'm saying is people are communicating better. For most ideas, writing is just a transport vessel. And people now have tools to communicate better than they otherwise would have.
Most people aren't trying to become good writers. That's true before, and true now.
On the other hand, this argument probably isn't worth having. If your frame is that LLMs are expensive toys that ruin everything -- well, that's quite an aggressive posture to start with and is both unlikely to bear a useful conversation or a particularly delightful future for you.
> What I'm saying is people are communicating better. For most ideas, writing is just a transport vessel for ideas. And people now have tools to communicate better than they would have been.
It basically boils down to "I want the external validation of being seen as a good writer, without any of the internal growth and struggle needed to get there."
I mean, kinda, but also: not only are someone’s meandering ramblings a part of a process that leads to less meandering ramblings, they’re also infinitely more interesting than LLM slop.
In my view LLMs are simply a different method of communication. Instead of relying on "your voice" to engage the reader and persuade them of your point of view, writing with LLMs, for analysis and exploration, is about creating an idea space that a reader can interact with, explore from their own perspective, and develop their own understanding of, which is much more powerful.
The global alignment also happens through media like TV shows and movies, and the internet overall.
I agree; I think we should try to do both.
In Germany, for example, we have very few typically German brands; our brands became very global. If you go to Japan, you will find the same products, like ramen or cookies or cakes, everywhere, but all of them slightly different, from different small producers.
If you go to a motorway/highway rest area in Japan, you will find local products. If you do the same in Germany, you find just the generic American shit: Mars, Mondelez, PepsiCo, Unilever...
Even our German coke, Fritz-Kola, is a niche/hipster thing even today.
I'm the OP. I can attest that I am not an LLM model creator! :)
I consider myself an LLM pragmatist. I use them where they are useful, and I educate people on them and try to push back on all the hype marketing disguised as futurism from LLM creators.
It's more that people who historically didn't have a voice now have one. It's often stupid, but sometimes also interesting and innovative. I saw a channel where a university-professor narrator ("I") comes to the realization she's been left-leaning/biased for decades, and that her recent male students no longer dare engage in debate because of shaming/gaslighting, etc. Then I click the channel description and it turns out it's "100% original writing". Now if it hadn't said that, it would be strawman propaganda. But now it does... Not sure how to put a finger on it; there's some nervous excitement when reading these days, not knowing who the sender is, getting these 'reveal' moments when finding out the whole thing was made up by some high-school kid with AI or an insane person.
The post sounds beige and AI-generated, ironically.
In any case, as someone who has experimented with AI for creative writing: LLMs _do not destroy_ your voice. They do flatten it, but with minimal effort you can make the output sound the way you find reflects your thinking best.
There are deterministic solutions for grammar and spellcheck. I wouldn't rely on LLMs for this. Not only is it wasteful, we're turning to LLMs for every single problem, which is quite sad.
I have always had a very idiosyncratic way of expressing myself, one that many people do not understand. Just as having a smartphone has changed my relationship to appointments - turning me into a prompt and reliable "cyborg" - LLMs have made it possible for me to communicate with a broader cross section of people.
I write what I have to say, I ask LLMs for editing and suggestions for improvement, and then I send that. So here is the challenge for you: did I follow that process this time?
I think there's a difference between using an LLM as an editor and asking the LLM to write something for you. The output in the former I find to still have a far clearer tonal fingerprint than the latter.
And who's to say your idiosyncratic expression wouldn't find an audience as it changes over time? Just you saying that makes me curious to read something you wrote.
Not the GP, but I'm a millennial who leans on cultural references and has a bit of verbal flourish that I think comes from a diet of ironic, quirky, dialogue-heavy media in the early 2000s, stuff like Firefly, Veronica Mars, and Pushing Daisies, not to mention 90s Tarantino, John Cusack films, and so on.
I've never given it too much thought, it's just... the way I communicate, and most people in my life don't give much thought to it either. But I recently switched jobs, and a few people there remarked on it, and I've also recently been corresponding with someone overseas who is an intermediate-level English speaker and says I sometimes hurt their brain.
Not making a value judgment either way on whether it's "sophisticated" or whatever, but it is, I think, part of my personality, and if I used LLM editing/translation I would want it to be only in the short term, and certainly not as something permanent.
I would be interested to see an example of a before and after on this. I do think LLMs as editors and rewriters can be useful sometimes, but I usually only ever see them used as a means to puff out an idea into longer prose which is really mostly counterproductive.
I think it can be useful as a tone-check sometimes, like show me how a frustrated or adversarial reader is going to interpret this thing I'm about to send/post.
Transformation seems reasonable for that purpose. And if we were friends, I'd rather read your idiosyncratic raw output.
At some point, generation breaks a social contract that I'm using my energy and attention consuming something that another human spent their energy and attention creating.
In that case I'd rather read the prompt the human brain wrote, or if I have to consume it, have an LLM consolidate it for me.
I should probably do that too. I once wrote an email that to me was just filled with impersonal information. The receiver was somebody I did not personally know. I later learned I made that person cry. Which I obviously did not intend. I did not swear or call anyone names. I basically described what I believe they did, what is wrong about that and what they should do instead.
Here's my guess: your post reflects your honest opinion on the matter, with some LLM help. It elaborated on your smartphone analogy, and may have tightened up the text overall.
Your post reminded me how I could tell my online friend was pissed just because she typed "okay." or "K." instead of "okay". We could sense each other's emotional state from texting; one of those friendships you form over text through the internet. I wouldn't recommend forming these too deeply, since some in-person nuance is lost; we could never transition to real-life friends despite living close by. But we could tell what mood the other was in just from typing. It was wild.
There has been an explosion in verbose status update emails at my job recently which have all clearly been written by ChatGPT. It’s the fucking emojis though that drive me wild. It’s so hard to read the actual content when there’s an emoji for every single sentence.
And now when I see these emoji fests I instantly lose interest and trust in the content of the email. I have to spend time sifting through the fluff to find what’s actually important.
LLMs are creating an asymmetric imbalance between effort to write and effort to read. What takes my coworkers probably a couple of minutes to draft requires me 2-3x as long to decipher. That imbalance used to be the opposite.
I’ve raised the issue before at work, and one response I got was to “use AI to summarize the email.” Are we really spending all this money and energy on the world’s worst compression algorithm?
> Social media has become a reminder of something precious we are losing in the age of LLMs: unique voices.
Social media already lost that nearly two decades ago - it died as content marketing rose to life.
Don't blame on LLMs what we've long lost due to cancer that is advertising[0].
And don't confuse GenAI as a technology with what the cancer of advertising coopts it to. The root of the problem isn't in the generative models, it's in what they're used for - and the problem uses aren't anything new. We've been drowning in slop for decades, it's just that GenAI is now cheaper than cheap labor in content farms.
No, that's like pretending the weapons weren't already available. Everyone has had assault rifles for two decades; giving access to smart rifles isn't really changing anything about the nature of the problem.
It's not the voice; it's the lack of need to talk tough about the hard problems. If you accept what is and just babble, anything you write will sound like babbling. There's enough potential and wiggle room, but people align, even when they don't, just to align. When Rome was flourishing, only a few saw what was lingering in the cracks, but when in flourishing Rome ...
I call it the enshittification fix-point. Not only are we losing our voice; we'll soon enough start thinking and talking like LLMs. After a generation of kids grows up reading and talking to LLMs, that will be the only way they'll know how to communicate. You'll talk to a person and you won't be able to tell the difference between them and an LLM, not because LLMs became amazing, but because our writing and thinking styles became more LLM-like.
- "Hey, Jimmy, the cookie jar is empty. Did you eat the cookies?"
- "You're absolutely right, father — the jar seems to be empty. Here is bullet point list why consuming the cookies was the right thing to do..."
Well - voice is ultimately coupled to a person. LLMs thus fake and pretend being a person. There are, however, use cases for LLMs too. I saw them used for the creation of video games, and also for content generated by hobbyists. So, while I think AI should actually die, for hobbyists generating mods for old games, AI voice-overs may not be that bad. Just as AI-generated images for free-to-play browser games may not be solely bad either.
Of course there are also horrible uses of AI: liars, scummy cheaters, and fake videos on YouTube, owned by a greedy mega-corporation that sold its soul to AI. So the bad use cases may outnumber the good ones, but there are good use cases, and "losing our voice to LLMs" isn't the whole view of it, sorry.
Subsume your agency. Stop writing. Stop learning. Stop thinking for yourself. Become hylic. Just let the machine think everything for you and act as it acts. Those that own them are benevolent and there will never be consequences.
I'm in complete agreement with the idea that people should express themselves in their own words. But this collides with certain facts about U.S. adults (and students). This summary (https://www.nu.edu/blog/49-adult-literacy-statistics-and-fac...) reveals that:
* 28% of U.S. adults are at or below "level 1" literacy, essentially meaning people unable to function in an environment that requires written language skills.
* 54% of U.S. adults read below a sixth-grade level.
These statistics refer to an inability to interpret written material, much less create it. As to the latter, a much smaller percentage of U.S. adults can compose a coherent sentence.
We're moving toward a world where people will default to reliance on LLMs to generate coherent writing, including college students, who according to recent reports are sometimes encouraged to rely on LLMs to complete their assignments.
If we care to, we can distinguish LLM output from that of a typical student: An LLM won't make the embarrassing grammatical and spelling errors that pepper modern students' prose.
Yesterday I saw this headline in a major online media outlet: "LLMs now exceed the intelect [sic] of the average human." You don't say.
I think that's for the best. It was human-made slop, now it's automated slop. Can't wait for people to stop paying it attention so that it withers. "It" being the whole attention economy scam.
Social media is a reminder we are losing our voice to mass media consumption way before LLMs were a thing.
Even before LLMs, if you wanted to be a big content creator on YouTube, Instagram, tiktok..., you better fall in line and produce content with the target aesthetic. Otherwise good luck.
I’ve realized that if you say that pro AI commenters are actually bot accounts, theres not really much that can be done to prove otherwise.
The discomfort and annoyance that sentence generates, is interesting. Being accused of being a bot is frustrating, while interacting with bots creates a sense of futility.
Back in the day when Facebook first was launched, I remember how I felt about it - the depth of my opposition. I probably have some ancient comments on HN to that effect.
Recently, I’ve developed the same degree of dislike for GenAI and LLMs.
Process before product, unless the product promises to deliver a 1000% return on your investment. Only the disciplined artist can escape that grim formula.
It's a little odd for a capitalist society that values outputs so highly to also value process as much.
We've proved we can sort of value it, through supporting sustainability/environmental practices, or at least _pretending to_.
I just wonder, what will be the "Carbon credits" of the AI era. In my mind a dystopian scheme of AI-driven companies buying "Human credits" from companies that pay humans to do things.
We have a channel at work where we share our experiences in using AI for software engineering.
Predictably, this has turned into a horror zone of AI written slop that all sounds the same, with section titles with “clever” checkbox icons, and giant paragraphs that I will never read.
"Over time, it has become obvious just how many posts are being generated by an LLM. The tell is the voice. Every post sounds like it was posted by the same social media manager."
I'd love to see an actual study of people who think they're proficient at detecting this stuff. I suspect that they're far less capable of spotting these things than they convince themselves they are.
Everything is AI. LLMs. Bots. NPCs. Over the past few months I've seen demonstrably real videos posted to sites like Reddit, and the top post is someone declaring that it is obviously AI, they can't believe how stupid everyone is to fall for it, etc. It's like people default assume the worst lest they be caught out as suckers.
I actually think we’re overestimating how much of "losing our voice" is caused by LLMs. Even before LLMs, we were doing the same tweet-sized takes, the same medium-style blog posts and the same corporate tone.
Ironically, LLMs might end up forcing us back toward more distinct voices because sameness has become the default background.
My theory is that LLMs are accelerating [online] radicalization by commoditizing bland, HR-approved opinions. If you want to sound like a human on the internet, for better or for worse the easiest way is to say something that would make Anthropic’s safety team have a heart attack.
I mean there's still Grok... surely that gives may safety teams heartburn.
But I find this take interesting. The brewing of a new kind of counter culture that forces humans to express themselves creatively. Hopefully it doesn't get too radical.
Grok still has that annoying tone, it just uses it to say weird things.
> commoditizing bland, HR-approved opinions. If you want to sound like a human on the internet, for better or for worse the easiest way is to say something that would make Anthropic’s safety team have a heart attack.
I agree.
LLMs are like blackface for dumbfucks: LLMs let the profoundly retarded put on the makeup and airs of the literati so they can parade around self-identifying as if they have a clue.
If you don't like the barbs in this kind of writing prepare for more anodyne corporate slop. Every downvote signals to the algorithm that you prefer mediocrity.
I'm all about using the rainbow of language to it's full breadth, but if you're going to go for shock jock you should maybe have something better to say than "hurdurr I smart and world bad 'cause dumb people". Makes you sound a little like you're trying too hard to be part of the 'literati', whatever that's supposed to mean.
"Every downvote signals to the algorithm that you prefer mediocrity."
Not on HN it doesn't.
Also ironic is how the post about having a unique voice is written in one-sentence-paragraph LinkedIn clickbait style.
You're absolutely right! Tell me more about how ironic is how the post about having a unique voice is written in one-sentence-paragraph LinkedIn clickbait style.
The other day I interviewed a candidate who had lost their unique voice...
Spoken like an AI bot.
Yes, fully agreed. Most people producing content were always doing it to get quick clicks and engagement. People always had to filter things anyhow and you had to choose where you get your content from.
People were posting Medium posts rewriting someone else's content, wrongly, etc.
Content recycling has become so cheap, effort-wise, it’s killed the business. Thank god.
It doesn't; it just makes it cheaper by not requiring human effort.
Yes. That particular content-farm business model (rewrite 10 articles -> add SEO slop -> profit) is effectively dead now that the marginal cost is zero.
I’m not mourning it.
I mean, if you typed something by your own hand it is in your voice. The fact that everyone tried to EMULATE the same corporate tone does not at all remove peoples individual ways of communicating.
I’m not sure I agree with this sentiment. You can type something "by hand" and still have almost no voice in it if the incentives push you to flatten it out.
A lot of us spent years optimizing for clarity, SEO, professionalism etc. But that did shape how we wrote, maybe even more than our natural cadence. The result wasn’t voice, it was everyone converging on the safe and optimized template.
If you chose to trade your soul to 'incentives', and replace incisive thought with bland SEO and professionalism -- you chose this. Your voice has become the bland language of business.
So in that case, would someone willing to publish LLM-speak under their name be similarly adopting that "voice"?
Does that entail that LLMs are not in fact erasing our societal voices, only making it easier to adopt bland-corporate en masse?
That's a reasonable interpretation. People are choosing to mute their voices, and replace their identity with ChatGPT.
It's not a passive loss of voice. Their voice didn't fall off and slip between the couch cushions.
If you care about voice, you still can get a lot of value from LLMs. You just have to be careful not to use a single word they generate.
I've had a lot of luck using GPT5 to interrogate my own writing. A prompt I use (there are certainly better ones): "I'm an editor considering a submitted piece for a publication {describe audience here}. Is this piece worth the effort I'll need to put in, and how far will I need to cut it back?". Then I'll go paragraph by paragraph asking whether it has a clear topic, flows, and then I'll say "I'm not sure this graf earns its keep" or something like that.
GPT5 and Claude will always respond to these kinds of prompts with suggested alternative language. I'm convinced the trick is never to use those words, even if they sound like an improvement over my own. At the first point where that happens, I dial my LLM-wariness up to 11 and take a break. Usually the answer is to restructure paragraphs, not to apply the spot improvement (even in my own words) the LLM is suggesting.
LLMs are quite good at (1) noticing multi-paragraph arcs that go nowhere (2) spotting repetitive word choices (3) keeping things active voice and keeping subject/action clear (4) catching non-sequiturs (a constant problem for me; I have a really bad habit of assuming the reader is already in my head or has been chatting with me on a Slack channel for months).
Another thing I've come to trust LLMs with: writing two versions of a graf and having it select the one that fits the piece better. Both grafs are me. I get that LLMs will have a bias towards some language patterns and I stay alert to that, but there's still not that much opportunity for an LLM to throw me into "LLM-voice".
What I struggle with more is things like Grammarly, where it's a mix of fixing very nitpicky grammar/spelling/structure issues that push things from casual writing in my own voice into more of a professional tone.
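If you want to mechanize that "critique only, never the words" discipline, it's easy to script: ask for verdicts and forbid replacement text, so nothing generated can leak into the draft. A sketch; the prompt wording and model name are my own assumptions, against an OpenAI-style endpoint:

    // interrogate.ts - paragraph-by-paragraph verdicts only, no rewrites,
    // so none of the model's words can leak into the draft.
    import { readFileSync } from "node:fs";

    async function critique(paragraph: string): Promise<string> {
      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: "gpt-5", // model name is an assumption; use whatever you trust as an editor
          messages: [
            {
              role: "system",
              content:
                "You are a tough acquisitions editor. For the given paragraph, say only whether it has a clear topic, whether it flows, and whether it earns its keep. Do NOT suggest replacement wording.",
            },
            { role: "user", content: paragraph },
          ],
        }),
      });
      const data = await res.json();
      return data.choices[0].message.content;
    }

    // Walk the draft one graf at a time.
    const draft = readFileSync("draft.txt", "utf8");
    for (const graf of draft.split(/\n\n+/)) {
      console.log(`---\n${graf}\n=> ${await critique(graf)}`);
    }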
They’re also great, in my experience, for overcoming writer’s block and procrastination. Just as a rubber duck to bounce ideas off of and follow different threads.
It makes the writing process faster and more enjoyable, despite never using anything the LLM generates directly.
Workshopping with humans is even better, if you find the right humans, but they have an annoying habit of not being available 24/7.
I think you just did another non-sequitur... What is a graf? Is it journalism slang for "paragraph"?
Yeah, easier to type, easier to read, deliberately misspelled so it sticks out to copyeditors. I use it sometimes without thinking. An LLM would have caught that! :)
> copyeditors
Do those jobs still exist?
All of this sounds like something you could just do yourself after putting a piece down for a day or two and coming back to it with fresh eyes. What benefit is there of cooking the oceans with a bullshit generator?
Like, sure, it's possible to do this with an LLM, but it's also possible to do it without, at roughly similar levels of effort, without contributing to all of the negative externalities of the LLM/genAI ecosystem.
Anything you could automate you could do yourself. What’s the benefit?
Being able to get useful feedback immediately rather than 48 hours later is useful if you need text today.
If you don't want to eat meat on Fridays, I'm certainly not going to tell you that you should. You do you.
Because the complaints about the power and water usage of AI are mostly motivated reasoning. I don't like AI, therefore I'm going to find a reason not to like it. Listen, if it's Greta Thunberg pointing out that AI datacenters use a lot of resources, yeah, I'm willing to listen. But when the voices saying "but what about all the water/electricity it's wasting" come from individuals I know personally haven't previously given a shit about the planet or conservation or recycling, and have made fun of me for reusing things instead of throwing stuff into the garbage, I'm sorry, but those complaints fall on deaf ears. Not saying you are one of them, just a theme I've noticed with people in my life.
There's something unique about art and writing where we just don't want to see computers do it
As soon as I know something is written by AI I tune out. I don't care how good it is - I'm not interested if a person didn't go through the process of writing it
I had a weird LLM use instance happen at work this week. We were in a big important protocol review meeting with 35 remote people, and someone asked how long IUDs take to begin having an effect in patients. I put it in ChatGPT for my own reference and read the answer in my head, but didn't say anything (I'm ops, I just row the boat and let the docs steer the ship). Anyway, this bigwig Oxford/Johns Hopkins cardiologist who we pay $600k a year pipes up in the meeting, and her answer is VERBATIM the ChatGPT language, word for word. All she did was ask it for the answer and repeat what it said! It kinda made me sad that all this big fancy doctor is doing is spitting out lazy default ChatGPT answers to guide our research :( Also everyone else in the meeting was so impressed with her, "wow Dr. so-and-so thank you so much for this helpful update!" etc. :-/
>her answer is VERBATIM reading off the ChatGPT language word for word
How could it be verbatim the same response you got? Even if you both typed the exact same prompt, you wouldn't get the exact same answer.[0, 1]
[0] https://kagi.com/assistant/8f4cb048-3688-40f0-88b3-931286f8a...
[1] https://kagi.com/assistant/4e16664b-43d6-4b84-a256-c038b1534...
We have a work enterprise GPT account across the company.
How does that explain what you observed?
The only way I can understand that as an explanation is if your entire company can see each other's chats, and so she clicked yours and read the response you got. Is that what you're saying?
They're saying that the shared account is enough for OpenAI to provide the same result. Very interesting; I'd like to know more, like whether it was a generic IUD or a specific one in the query. Also, the doc is a cardiologist; they don't specialize in gyno stuff, and their training/schooling is enough for them to evaluate sources.
Just for reference, before AI it was typical for employers of doctors to pay for a service/app called UpToDate, which provided vetted info for docs, like a Google for medicine.
I know some GPs that use WebMD so it plays out like:
- Google search: free
- Having the schooling, training, and experience to evaluate the results: $300k per year
There were several specific brands cited in the response and she read through them one by one in the same order with the supporting details, word for word. I think it just gave us the same response and she read it off the page.
How else would she have been able to parrot the exact same GPT response without reading it directly? You think she just thought of it word for word exactly the same off the top of her head?
The LLM may well have pulled the answer from a medical reference similar to that used by the dr. I have no idea why you think an expert in the field would use ChatGPT for a simple question, that would be negligence.
A climate scientist I follow uses Perplexity AI in some of his YouTube videos. He stated one time that he uses it for the formatting, graphs, and synopses, but knows enough about what he's asking that he knows what it's outputting is correct.
An "expert" might use ChatGPT for the brief synopsis. It beats trying to recall something learned about a completely different sub-discipline years ago.
She read it EXACTLY as written from the ChatGPT response, verbatim. If it was her own unique response there would have been some variation.
The one thing a cardiologist should be able to do better than a random person is verify the plausibility of a ChatGPT answer on reproductive medicine. So I guess/hope you're paying for that verification, not just the answer itself.
Or both the doctor and ChatGPT were quoting verbatim from a reputable source?
You're absolutely right! Art is the soul of humanity and without it our existence is pointless. Would you like me to generate some poetry for you, human?
What's more, the suspicion that something was written by AI causes you to view any writing less charitably. And once it's been approached from that angle, it's hard to move your mental frame back to being open to the writing. Even untainted writing gets infected by the smell of LLMs.
That's what's happening to me with music and discovering new artists. I love music so much, but I simply cannot trust new music anymore. The lyrics could be written by AI, the melodies could've been recommended by AI, or the full-blown song could've been made by AI. No thanks, back to the familiar stuff...
If the writer’s entire process is giving a language model a few bullet points… I’d rather them skip the LLM and just give me the bullet points. If there’s that little intent and thought behind the writing, why would I put more thought into reading it than they did to produce it?
Writing nice-sounding text used to require effort and attention to detail. This is no longer the case, and this very useful heuristic has been completely obliterated by LLMs.
For me personally, this means that I read less on the internet and more pre-LLM books. It's a sad development nevertheless.
A person can be just as wrong as an LLM, but unless they're being purposefully misleading, or sleep-writing, you know they reviewed what they wrote for their best guess at accuracy.
Art, writing, and communication is about humans connecting with each other and trying to come to mutual understanding. Exploring the human condition. If I’m engaging with an AI instead of a person, is there a point?
There’s an argument that the creator is just using AI as a tool to achieve their vision. I do not think that’s how people using AI are actually engaging with it at scale, nor is it the desired end state of people pushing AI. To put it bluntly, I think it’s cope. It’s how I try to use AI in my work but it’s not how I see people around me using it, and you don’t get the miracle results boosters proclaim from the rooftop if you use it that way.
I couldn't agree more. I don't care how much prompt or model finetuning you did, if something is shat out by an LLM I'm not interested even remotely.
I wish more people held the same opinion, actually. Unfortunately, my sense is that most people don't care; they're fine with LLM-generated crap.
It honestly makes me want to blow my brains out
> There's something unique about art and writing where we just don't want to see computers do it
Speak for yourself. Some of the most fascinating poetry I have seen was produced by GPT-3. That is to say, there was a short period when it was genuinely thought-provoking, and it has since passed. In the age of "alignment," what you get with commercial offerings is dog shite... But this is more a statement on American labs (and to a similar extent, the Chinese who have followed) than on "computers" in the first place. Personally, I'm looking forward to the age of computational literature, where authors like me would be empowered to engineer whole worlds, inhabited by characters ACTUALLY living in the computer. (With the added option of the reader playing one of the parts.) This will radically change how we think about textual form, and I cannot wait for compute to get there.
Re: modern-day slop, well, the slop is us.
Denial of this comes from a place of ignorance; take the blinkers off and you might learn something! Slop will eventually pass, but we will remain. This is the far scarier proposition.
"inhabited by characters ACTUALLY living in the computer"
It's hard to imagine these feeling like characters from literature and not characters in the form of influencers / social media personalities. Characters in literature are in a highly constrained medium, and only have to do their story once. In a generated world the character needs to be constantly doing "story things". I think Jonathan Blow has an interesting talk on why video games are a bad medium for stories, which might be relevant.
Please share! Computational literature is my main area of research, and constraints are very much at the center of it... I believe there are effectively two kinds of constraints: those in the language of stories themselves, as thing-in-itself, and those imposed by the author. In a way, authorship is incredibly repressive: authors impose strict limits on the characters, what they get to do, etc. This is a form of slavery. Characters in traditional plays only get to say exactly what the author wants them to say, when he wants them to say it. Whereas in computational literature, we get to emancipate the characters! This is a far cry from "prompting," but I believe there are concrete paths forward that would be somewhat familiar to game-dev people (though they might not immediately click).
Now, there are fundamental limits to the medium (as a function of computation), but that's a different story.
Just so I understand who I am talking with here, when you say authorship is a form of slavery, is that because you believe the characters in a written story have a consciousness/sentience/experience just like animals do, or are you just using the word 'slavery' to mean that in traditional literature the characters are static? One of the strengths of traditional literature is that staticness, however, because the best stories from literature are necessarily highly engineered and contrived by the author. Great stories don't happen in the real world (without dramatization of the events) exactly because too many things can happen for a coherent narrative to unfold.
I'm a huge fan of Dwarf Fortress, but the stories aren't Great without imagination from the player selectively ignoring things. Kruggsmash is able to make them compelling because he is a great author
> Characters in traditional plays only get to say exactly what the author wants them to say
But the human actors sometimes ad-lib, as well as being in control of intonation and body language. It takes a great deal of skill to portray someone else's words in a compelling and convincing manner. And for an actor, I imagine it can be quite fun to do so.
> Personally, I'm looking forward to the age of computational literature, where authors like me would be empowered to engineer whole worlds, inhabited by characters ACTUALLY living in the computer.
So you want sapient, and possibly sentient, beings created solely for entertainment? Their lives constrained to said entertainment? And you'd want to create them inside of a box that is even more limited than the space we live in?
My idea of godhood is to first try to live up to a moral code that I'd be happy with if I was the creation and something else was the god.
If this isn't what you meant, then yes, choose your own adventure is fun. But we can do that now with shared worlds involving other humans as co-content creators.
> So you want sapient, and possibly sentient, beings created solely for entertainment? Their lives constrained to said entertainment? And you'd want to create them inside of a box that is even more limited than the space we live in?
Sshh! If they know we've figured it out, we'll all be restarted again.
I would love to see truly good AI art. Right now the issue is that AI isn't at the point where it could produce actually good art by itself. If we had to define art, it would be kind of the opposite of what LLMs produce right now: LLMs try to produce the statistical norm, while art is more about producing something outside the norm. When LLMs/AI try to produce out-of-norm things right now, they only produce something random, without connections.
Art is something out of the norm, and it should make some sense at some clever level.
But if there was AI that truly could do that, I would love to see it, and would love to see even more of it.
You can see this clearly if you ask AI to make original jokes. They usually aren't very good, and when they are, it's because the model got randomly lucky somehow. It can come up with related analogies for jokes, but that's just simple pattern matching of what is similar to what, not insightful and clever observation.
I've lost the link but there was quite a cool video of virtual architecture created by AI. It was ok because it wasn't trying to be human like - it was kind of uniquely AI. Not the exact one but this kind of stuff https://www.reddit.com/r/Futurism/comments/1oedb0m/were_ente...
I deleted my Facebook account a couple of years ago and my Twitter one yesterday.
It's not just LLMs; it's how the algorithms promote engagement, i.e. rage bait, videos with obvious inaccuracies, etc. Who gets rewarded? The content creators and the platform. Engaging with it just seems to accentuate the problem.
There need to be algorithms that promote cohort and individual preferences.
Just because I said to someone 'Brexit was dumb', I don't expect to get fed 1000 accounts talking about it 24/7. It's tedious and unproductive.
> It's not just LLMs, it's how the algorithms promote engagement. i.e. rage bait, videos with obvious inaccuracies etc.
I guess, but I'm on quite a few "algorithm-free" forums where the same thing happens. I think it's just human nature. The reason it's under control on HN is rigorous moderation; when the moderators are asleep, you often see dubious political stuff bubble up. And in the comments, there's often a fair amount of patently incorrect takes and vitriol.
On HN everybody sees the same ordering. Therefore you get to read opinions that are not specifically selected to make you feel just the perfect amount of outrage/self-righteousness.
Some of that you may experience as 'dubious political stuff' and 'patently incorrect takes'.
Edit, just to be clear: I'm not saying HN should be unmoderated.
Yeah, this is a critical difference: most of the issues are sidestepped because everyone knows nobody can force a custom front page tailored to a specific reader.
So there’s no reason to try a lot of the tricks and schemes that scoundrels might have elsewhere, even if those same scoundrels also have HN accounts.
The front page is managed extensively on HN, so is this an argument for stronger moderation?
I think there's an important distinction between strict moderation and curation, but in general yes I'd agree.
Only when certain people don't decide to band together and hide posts from everyone's feed by abusing the "flag" function. Coincidentally, those posts often fit neatly into the categories you outlined.
Abuse of the flagging system is probably one of the worst problems currently facing HN. It looks like mods might be trying to do something about it, as I've occasionally seen improperly-flagged posts get resuscitated, but it appears to take manual action by moderators, and by the time they get to it, the damage is done: The article was censored off the front page.
Even with the addition of tomhow, they are clearly stretched too thin to make any meaningful impact. Their official answer to this issue, by the way, is to point out that you can email them to elicit this manual action, which, if you ask me, is a fucking joke. It clearly shows that the mammoth-age stack this site is written in, and the lack of resources allocated to its support, are having a massive impact on their ability to keep up with the traffic. But then again, this site only exists to funnel attention to YC's startups, and that is something you need to keep in mind while trying to answer any questions about its current state.
It really seems small compared to reddit.
I don't think I've ever downvoted anyone on hackernews yet; it just does not seem important.
On reddit, on the other hand, I just had to downvote wrong opinions. This works to some extent, until moderators interfere and ban you. That part actually made me stop using reddit, in particular after someone made a complaint and I got banned for some days. I objected, and the moderators of course did not respond. I cannot accept random moderators just chiming in arbitrarily and flagging "this comment you made is a threat" when it clearly was not. But you can't really argue with reddit moderators.
You can’t get banned just for downvoting; nobody can see someone else’s voting history. You buried the lede: you were banned for your comments, not for your voting activity.
I don’t know why this is being downvoted, I’ve witnessed it many times myself.
It’s true that HN has a good level of discussion but one of the methods used to get that is to remove conversation on controversial topics. So I’m skeptical this is a model that could fit all of society’s needs, to say the least.
The comment consists of criticism on flagging behavior. Though it might have a point, it seems only vaguely related to its parent comment about non-personalized ordering.
In downvoting it, they are proving me right. For posterity, there is a mastodon account [0] collecting flagged posts in an easily digestible form, it really does paint a certain picture if you ask me.
[0] https://mastodon.social/@hn_flagged
The DOGE topics are a perfect example. HN users are uniquely placed to provide useful perspectives on DOGE but it gets flagged very regularly.
I want to agree with this. Maybe OP is young or didn't frequent other communities before "social networks", but on IRC, even on Usenet you'd see these behaviors eventually.
Since they are relatively open, at some point someone comes in who doesn't care about anything, or is extremely vocal about something, and... there goes the nice forum.
MySpace was quite literally my space. You could basically make a custom website with a framework that included socialisation. But mostly it was just GeoCities for those who might only want to learn HTML. So it was a creative canvas with a palette.
Right, but that’s slightly different.
I think the nuance here is that with algorithmic based outrage, the outrage is often very narrow and targeted to play on your individual belief system. It will seek out your fringe beliefs and use that against you in the name of engagement.
Compare that to a typical flame war on HN (before the mods step in) or IRC.
On HN/IRC it’s pretty easy to identify when there are people riling up the crowd. And they aren’t doing it to seek out your engagement.
On Facebook, etc, they give you the impression that the individuals riling up the crowd are actually the majority of people, rather than a loud minority.
There's a big difference between consuming controversial content from people you believe are a loud minority vs. controversial content from what you believe is a majority of people.
Or if the moderation was good someone would go “nope, take that bullshit elsewhere” and kick them out, followed by everyone getting on with their lives. It wasn’t obligatory for communities to be cesspits.
> Maybe OP is young or didn't frequent other communities before "social networks", but on IRC, even on Usenet you'd see these behaviors eventually
I’m not exactly old yet, but I agree. I don’t know how so many people became convinced that online interactions were pleasant and free of ragebait and propaganda prior to Facebook.
A lot of the old internet spaces were toxic cesspools. Most of my favorite forums eventually succumbed to ragebait and low effort content.
>pleasant and free of ragebait and propaganda
Most people are putting forth an argument of pervasiveness and scale, not existence.
I suppose more than a few of us olds remember Serdar Argic's attempts to redefine the Armenian genocide on Usenet.
https://en.wikipedia.org/wiki/Serdar_Argic
But Serdar was relatively easy to ignore, because it was just one account, and it wasn't pushed on everyone via an algorithm designed to leverage outrage to make more money for one of the world's billionaires. You're right: pervasiveness and scale make a significant difference.
I would be intrigued by using an LLM to detect content like this and hold it for moderation. The elevator pitch would be training an LLM to be the moderator because that's what people want to hear, but it's most likely going to end up a moderator's assistant.
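As a rough illustration of that moderator's-assistant shape (not a tested pipeline; the model name, prompt, and YES/NO protocol are all assumptions):

```python
# Sketch: ask an LLM whether a post looks like engagement bait, and if so
# hold it in a queue for a human moderator rather than removing it.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def needs_human_review(post_text: str) -> bool:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "You assist a forum moderator. Answer YES if the "
                        "post below is ragebait or engagement bait, "
                        "otherwise answer NO."},
            {"role": "user", "content": post_text},
        ],
    )
    answer = resp.choices[0].message.content.strip().upper()
    return answer.startswith("YES")

# Flagged posts go to a review queue; the human still makes the call.
```

The point being that the LLM only triages; the accept/reject decision stays with the moderator.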
I think the curation of all media content using your own LLM that has been tuned using your own custom criteria _must_ become the future of media.
We've long done this personally at the level of a TV news network, magazine, newspaper, or website -- choosing info sources that were curated and shaped by gatekeeper editors. But with the demise of curated news, it's becoming necessary for each of us to somehow filter the myriad individual info sources ourselves. Ideally this will be done using a method smart enough to take our instructions and route only approved content to us, while explaining what was approved/denied and being capable of being corrected and updated. Ergo, the LLM-based custom configured personal news gateway is born.
Of course the criteria driving your 'smart' info filter could be much more clever than allowing all content from specific writers. It could review each piece for myriad strengths/weaknesses (originality, creativity, novel info, surprise factor, counter intuitiveness, trustworthiness, how well referenced, etc) so that this LLM News Curator could reliably deliver a mix of INTERESTING content rather than the repetitively predictable pablum that editor-curated media prefers to serve up.
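A minimal sketch of that routing step, assuming the per-criterion scores in [0, 1] already come from somewhere (an LLM rubric, say); the criteria names, weights, and threshold are illustrative, not a worked-out design:

```python
# Combine per-criterion scores into an approve/deny decision, keeping the
# breakdown so the curator can explain itself and be corrected later.
CRITERIA = ["originality", "novel_info", "surprise", "trustworthiness"]

def route(scores: dict[str, float], threshold: float = 0.6):
    overall = sum(scores.get(c, 0.0) for c in CRITERIA) / len(CRITERIA)
    decision = "approve" if overall >= threshold else "deny"
    return decision, round(overall, 2), scores  # explanation travels along

print(route({"originality": 0.9, "novel_info": 0.7,
             "surprise": 0.5, "trustworthiness": 0.8}))
# -> ('approve', 0.72, {...})
```

The hard part, of course, is producing trustworthy scores in the first place, not combining them.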
That's the government regulation I want but it's probably not the government regulation we will get because both major constituencies have a vested interest in forcing their viewpoints on people. Then there's the endless pablum hitting both sides, giving us important vital cutting edge updates about influencers and reality TV stars whether we want to hear about them or not...
We say we want to win the AI arms race with China, but instead of educating our people about the pros and cons of AI as well as STEM, we know more than we want to know about Kim Kardashian's law degree misadventures and her belief that we faked the moon landing.
It would just become part of the shitshow, cf. Grok.
Which is why you should cancel your Twitter account unless you're on the same page with the guy who owns it, but I digress.
If a site wants to cancel any ideology's viewpoint, that site is the one paying the bills and they should have the right to do it. You as a customer have a right to not use that site. The problem is that most of the business currently is concentrated in a couple of social media sites, and the great Mastodon diaspora never really happened.
Edit: why do some people think it is their god-given right, to be enforced with government regulation, to push their viewpoints into my feed? If I want to hear what you have your knickers in a bunch about today, I will seek it out. This is the classic difference between push and pull, and push is rarely a good idea.
My social media feeds had been reduced to about 30% political crap, 20% things I wanted to hear about, and about 50% ads for something I had either bought in the deep dark past or had once Google searched plus occasionally extremely messed up temu ads. That is why I left.
When video games first started taking advantage of behavioral reward schedules (eg: skinner box stuff such as loot crates & random drops) I noticed it, and would discuss it among friends. We had a colloquial name for the joke and we called them "crack points." (ie, like the drug) For instance, the random drops that happen in a game like Diablo 2 are rewarding in very much the same way that a slot machine is rewarding. There's a variable ratio of reward, and the bit that's addicting is that you don't know whenever next "hit" will be so you just keep pulling the lever (in the case of a slot machine) or doing boss runs. (in the case of Diablo 2)
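To put a number on that pattern, here's a toy simulation of a variable-ratio schedule, the slot-machine/boss-run mechanic just described; the 1-in-8 odds are made up for illustration:

```python
import random

def pull(p_reward=1/8, rng=random):
    """One lever pull; pays off with a fixed probability, so the gap
    between rewards is unpredictable (a variable-ratio schedule)."""
    return rng.random() < p_reward

for trial in range(5):
    pulls = 1
    while not pull():
        pulls += 1
    print(f"trial {trial}: reward after {pulls} pulls")  # wildly variable
```

Contrast a fixed-ratio schedule (reward every Nth pull), which is predictable and far less compulsive; the unpredictability is what keeps the lever getting pulled.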
We were three friends: a psychology major, a recovering addict, and then a third friend with no background for how these sorts of behavioral addictions might work. Our third friend really didn't "get it" on a fundamental level. If any game had anything like a scoreboard, or a reward for input, he'd say "it's crack points!" We'd roll our eyes a bit, but it was clear that he didn't understand that certain reward schedules had a very large effect on behavior, and not everything with some sort of identifiable reward was actually capable of producing behavioral addiction.
I think of this a lot on HN. People on HN will identify some surface similarity, and then blithely comment "see, this is nothing new, you're either misguided or engaged in some moral panic." I'm not sure what the answer is, but if you cannot see how an algorithmic, permanently-scrolling feed differs from people being rude in the old forums, then I'm not sure what would paint the picture for you. They're very different, and just because they might share some core similarity does not actually mean they operate the same way or have the same effects.
I think you touch on the crux of the issue here: education is one of the most potent defenses against this kind of psychological manipulation.
But not just any education. The humanities side of things, which are focused on the foundations of thought, morality and human psychology.
These things are sadly lacking in technical degrees and it shows.
It's also IMO why we see the destruction of our education systems as a whole as an element of control over society.
Thanks for this. I didn't realize until you said it why this issue might not be observable to a certain group of people. I think this is a cognitive-awareness issue: you can't really see it until you have an awareness of it through experience. I came from a drug-abuse background, and my wife, who was never involved in the level of addiction I was, has a hard time seeing how algorithms like this affect behavior.
>If any game had anything like a scoreboard, or a reward for input, he'd say "it's crack points!"
I don't think it's exactly wrong; you just have to look at it on a spectrum from minimal addictiveness to meth-level addiction. For example, in quarter-fed games, getting a high score displayed to others was quite the addictive behavior.
I think if you look at it this way, then "addiction" is just the far end of a spectrum that includes any repeated behaviors whatsoever.
I suspect it got worse with the advent of algorithm-driven social networks. When rage-inducing content is prevalent, and when engaging with it is the norm, I don't see why this behaviour wouldn't eventually leak to algorithm-free platforms.
Algorithm driven social media is a kind of pollution. As the density of the pollution on those sites increases it spills out and causes the neighbors problems. Think of 4chan style raids. It wasn't enough for them to snipe each other on their site, so they spread the joy elsewhere.
And that's just one type of issue. You have numerous kinds of paid actors that want to sell something or cause trouble or just general propaganda.
The thing is, the people on those "algorithm-free" forums still get manipulated by the algorithm in the rest of their life. So it seeps into everything.
It is of course human nature. The problem is what happens when algorithms can reinforce, exaggerate, and amplify the effects of this nature to promote engagement and ad-clicks. It's a cancer that will at the very least erode the agency of the average individual, and at worst create a hive mind that we have no control over. We are living in the preview of it all, I think.
I know that some folks dislike it, but Bluesky, and atproto in particular, provide the perfect tools to achieve this. There are some people, largely those who migrated from Twitter, who mostly treat Bluesky like an all-liberal version of Twitter, which results in a predictably toxic experience, like bizarro-world Twitter. But the future of a less toxic social media is in there, if we want it. I've created my own feeds that allow topics I'm interested in and blacklist those I'm not; I'm in complete control. For what it's worth, I've also had similarly pleasant experiences using Mastodon, although I don't have the same tools there that I do on Bluesky.
I personally don't feel like an ultra-filtered social media experience which only shows me things I agree with is a good thing. Exposing yourself to things you don't agree with is what helps us all question our own beliefs and prejudices, and grow as people. To me, only seeing things you know you are already interested in is no better than another company curating it for me.
I think it's less about the content topic and more about the meta content topic. E.g., I don't want to remove pictures of broccoli because I don't like broccoli; I'm trying to remove pictures of food because they make me eat more. Similarly, I don't want to remove Political Takes I Disagree With; I want to remove Political Takes Designed To Make Me Angry. The latter has a destructive viral effect whose antidote is inattention.
Echo chamber is a loaded term. Nobody is upset about the Not Murdering People Randomly echo chamber we've created for ourselves in civilised society, and with good reason. Many ideologies are internally stable and don't virally cause the breakdown of society. The concerning echo chambers are the ones that intensify and self-reinforce when left alone.
I've mentioned this a few times in the past, but I'm convinced that filters that exclude work much better than filters that include.
Instead of algorithms pushing us content it thinks we like (or what the advertisers are paying them to push on us), the relationship should be reversed and the algorithms should push us all content except the content we don't like.
Killfiles on Usenet newsreaders worked this way and they were amazing. I could filter out abusive trolls and topics I wasn't interested in, but I would otherwise get an unfiltered feed.
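A killfile-style exclude filter is tiny to express; here's a sketch, with the item shape and field names assumed for illustration rather than taken from any real platform:

```python
# Exclude-filter: the default is the full unfiltered feed; only items
# matching an explicit kill rule are dropped.
KILLED_AUTHORS = {"abusive_troll_42"}
KILLED_TOPICS = {"brexit"}

def visible(item: dict) -> bool:
    if item["author"] in KILLED_AUTHORS:
        return False
    if KILLED_TOPICS & set(item.get("topics", [])):
        return False
    return True

feed = [
    {"author": "alice", "topics": ["gardening"]},
    {"author": "abusive_troll_42", "topics": ["gardening"]},
    {"author": "bob", "topics": ["brexit", "politics"]},
]
print([i["author"] for i in feed if visible(i)])  # -> ['alice']
```

Inclusion-based feeds do the opposite: everything is hidden unless the algorithm decides to surface it, which is where the leverage for manipulation comes from.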
I’m at risk of sounding like an atproto shill at this point, but check out https://www.graze.social/.
I think every social media platform should allow something like this. You can make filters that work in either direction.
At least when you do this you are aware of it happening. Algorithmic feeds can shift biases without you even noticing.
> I personally don't feel like an ultra-filtered social media experience which only shows me things I agree with is a good thing. Exposing yourself to things you don't agree with is what helps us all question our own beliefs and prejudices, and grow as people.
You are the one who gets to control what is filtered or not, so that's up to you. It's about choice. By the way, a social media experience which is not "ultra filtered" doesn't exist. Twitter is filtered heavily, with a bias towards extreme right-wing viewpoints, the ones its owner is in agreement with. And that sort of filtering disguised as lack of bias is a mind virus. For example, I deleted my account a month or so ago after discovering that the CEO of a popular cloud database company that I admired was following an account who posted almost exclusively things along the lines of "blacks are all subhuman and should be killed." How did a seemingly normal person fall into that? One "unfiltered" tweet at a time, I suppose.
> To me, only seeing things you know you are already interested in is no better than another company curating it for me.
I curate my own feeds. They don't have things I only agree with in them, they have topics I actually want to see in them. I don't want to see political ragebait, left or right flavoured. I don't want to see midwit discourse about vibecoding. I have that option on Bluesky, and that's the only platform aside from my RSS reader where I have that option.
Of course, you also have the option to stare endlessly at a raw feed containing everything. Hypothetically, you could exactly replicate a feed that aggregates the kind of RW viewpoints popular on Twitter and look at it 24/7. But that would be your choice.
> For example, I deleted my account a month or so ago after discovering that the CEO of a popular cloud database company that I admired was following an account who posted almost exclusively things along the lines of "blacks are all subhuman and should be killed."
It seems like you're better off knowing that. Without Twitter, you wouldn't, right?
A venue that allows people to tell you who they really are isn't an unalloyed Bad Thing.
That's a good way of thinking about it, thank you, legitimately.
> Exposing yourself to things you don't agree with is what helps us all question our own beliefs and prejudices, and grow as people.
I have another wise-sounding soundbite for you: "I disapprove of what you say, but I will defend to the death your right to say it." —Voltaire. All this sounds dandy and fine until you actually try to examine the beliefs and prejudices at hand. Such examination seems possible, and it is, in theory; whereas in practice, i.e. in the application of language, "ideas" simply don't matter as much. Material circumstance, mindset, background, all the things that make us who we are, are largely immutable in our own frames of reference. You can get exposed to new words all the time, but if they come in a language you don't understand, they're of no use. This is not a bug but a feature, a learned mechanism that allows us to navigate massive search spaces without getting overwhelmed.
So far my experience is that unless you subscribe to the general narrative of the platform, the discover algorithm punishes you by directing the mob your way.
Two of my Bluesky posts on AI were attacked by all kinds of random people, which in turn led to some of those folks sending me emails and dragging some of my Lobsters and Hacker News comments into online discourse. Not a particularly enjoyable experience.
I’m sure one can have that same experience elsewhere, but really it’s Bluesky where I experienced this on a new level personally.
I saw that, and I'm sorry it happened. I thought both the response to your original post and the resulting backlash to both you and everyone who engaged with you sincerely were absurd. I think that because of atproto you have the flexibility to create a social media experience where that sort of thing cannot happen, but I also understand why you in particular would be put off from the whole thing.
I don’t think this is a technical problem but a social one. I think the audience defines itself by being the antithesis of Twitter instead of being a well-balanced one.
I was pretty optimistic in the beginning, but Bluesky doesn’t have organic growth, and those who hang out there are the core audience that wants to be there because of what the platform represents. But that also means rejection of a lot of things, such as AI.
In many ways I agree with you. In particular, the conglomeration of high percentages of atproto users onto Bluesky-owned and -moderated algorithms and feeds, and the replication of Twitter-style dogpiling, combined with the relative lack of ideological diversity on Bluesky, have created the perfect environment for toxicity, even if it doesn't reach the depths that Twitter does.
But conversely, that's the only place I disagree with you. Everything that is bad about Bluesky is much worse on Twitter. It's a larger red mob instead of a blue one (or vice versa, I guess, depending on how one assigns colors to political alignment), and some of the mob members are actually getting paid to throw bricks!
I tried Bluesky and wanted to like it. My account got flagged as spam, still no idea why. Ironically, that could be another way of losing one's voice to an LLM :)
> My account got flagged as spam, still no idea why.
This happened to me too, 3 weeks ago. The email said why I got flagged as spam, I replied to the email explaining I actually was a human, and after some minutes they unflagged my account. Did you not receive an email saying why?
Well that's the thing -- you might be flagged as spam in the Bluesky PDS, but there are other PDS's, with their own feeds and algorithms, and in fact you can make your own if you so choose. That's a lot of work, and Twitter is definitely easier, but atproto means that an LLM cannot steal your voice.
If you follow certain people, various communities will, en masse, block you and report you automatically with software "block lists". This can lead to getting flagged as spam.
I enjoy Mastodon a lot. Ad-free, algo-free. I choose what goes in my feed, I do get exposed to external viewpoints by people boosts (aka re-tweets) and i follow hashtags (to get content from people I do not know). But it's extremely peaceful, spam and bots are rare and get flagged quickly. There's a good ecosystem of mobile apps. I can follow a few Bluesky people through a bridge between platforms and they can follow me too.
That's truly all I need.
Doesn’t Bluesky have a set of moderation rules that guarantee that it will turn into bizarro-world Twitter?
> it's how the algorithms promote engagement.
They are destroying our democratic societies and should be heavily regulated. The same will become true for AI.
> should be heavily regulated.
By who, exactly? It’s easy to call for regulation when you assume the regulator will conveniently share your worldview. Try the opposite: imagine the person in charge is someone whose opinions make your skin crawl. If you still think regulation beats the status quo, then the call for regulation is warranted, but be ready to face the consequences.
But if picturing that guy running the show feels like a disaster, then let’s be honest: the issue isn’t the absence of regulation, it’s the desire to force the world into your preferred shape. Calling it “regulation” is just a polite veneer over wanting control.
I’m surprised at how much regulation has become viewed as a silver bullet in HN comments.
Like you said, the implicit assumption in every call for regulation is that the regulation will hurt companies they dislike but leave the sites they enjoy untouched.
Whenever I ask what regulations would help, the only responses are extremes like “banning algorithms” or something. Most commenters haven’t stopped to realize that Hacker News is an algorithmic social media site (are we not here socializing, with the order of posts and comments determined by a black-box algorithm?).
HN lets you choose the ordering (active, new, top [the actual algorithm]).
That's not true of Facebook: "new" does not show you posts in true order of recency.
Reddit still does, but it also injects ads that look like recent posts and aren't, which is misleading.
And HN doesn't choose to show you targeted, inflammatory content based on your history. That's a huge difference from Facebook.
Most people on HN who advocate regulating social media don't only want to prevent those platforms from showing targeted inflammatory content, they want to make all algorithmic feeds other than strictly chronological illegal, as well as moderation of any legal content.
From that point of view, Hacker News is little different than Facebook. One could even argue that HN's karma system is a dark pattern designed to breed addiction and influence conversation in much the same way as other social media platforms, albeit not to the same degree.
I would be astonished if a majority of people opposed to social media algorithms consider HN's approach to be sufficiently objectionable to be regulated or in any way similar to Facebook.
Hacker News doesn't use a strictly chronological feed. Hacker News manipulates the feed to promote certain items over others. Hacker News moderates legal content. Those are all features of social media algorithms that people are opposed to. It just isn't "objectionable" when HN does it.
And regulations of this kind always creep out of scope. We've seen it happen countless times. But people hate social media so much around here that they simply don't think it through, or else don't care.
You're moving the goalposts.
You said:
> Most people on HN who advocate regulating social media...want to make all algorithmic feeds other than strictly chronological illegal
I don't buy that, at all. I think they want a chronological feed to follow, and they want the end of targeted outrage machines that are poisoning civil discourse and breeding the type of destructive politics that has led our sitting U.S. president to call for critics to be hanged.
Comparing what Facebook has done to the U.S. with HN's algorithm is slippery slope fallacy to an extreme, and even if HN's front page algorithm against all odds was outlawed due to a political overreaction to the destruction Facebook has wrought, I'd call it a fair trade.
>Comparing what Facebook has done to the U.S. with HN's algorithm is slippery slope fallacy to an extreme, and even if HN's front page algorithm against all odds was outlawed due to a political overreaction to the destruction Facebook has wrought, I'd call it a fair trade.
You're trying to discredit my comment but it seems as if your anger just led you around to proving me right.
You're failing to differentiate between "want" and "willing to settle for if the slippery slope is much worse than I think is realistically possible".
At least HN karma is incremental and based on something approximating merit, as opposed to being a slot machine where you never know which comment will earn karma. More effort or rare insight generally yields more karma.
That hasn't been my experience. How much karma you get is heavily dependent on how many people see the comment. The most insightful, effort-filled comment at the bottom of a 4-day-old thread isn't going to get you nearly as much, if anything, compared to a joke with just the right amount of snark at the top of a post currently at the top of the front page.
That doesn't make it any less addictive or manipulative.
> But if picturing that guy running the show feels like a disaster, then let’s be honest: the issue isn’t the absence of regulation, it’s the desire to force the world into your preferred shape.
For example, we can forbid corporations from using algorithms beyond sorting by date of the post. Regulation could forbid gathering data about users: no gender, no age, none of the rest.
> Calling it “regulation” is just a polite veneer over wanting control.
It is you that may have misinterpreted what regulations are.
> For example, we can forbid corporations from using algorithms beyond sorting by date of the post
Hacker News sorted by "new" is far less valuable to me than the default homepage which has a sorting algorithm that has a good balance between freshness and impact. Please don't break it.
> It is you that may have misinterpreted what regulations are.
The definition of regulation is literally: "a rule or directive made and maintained by an authority." I am just scared about who the authority is going to be.
Control is the whole point. One person being in charge, enacting their little whims, is what you get in an uncontrolled situation and what we have now. The assumption is that you live in a democratic society and "the regulator" is effectively the populace. (We have to keep believing democracy is possible or we're cooked.)
By a not-for-profit community organization that has zero connection to, or interest in, any for-profit enterprise, that represents the stable wellbeing of society, and that has a specific mandate to do so.
Just like the community organizations we had that watched over government agencies that we allowed to be destroyed because of profit. It's not rocket science.
> By a not-for-profit community organization that has zero connection to, or interest in, any for-profit enterprise, that represents the stable wellbeing of society, and that has a specific mandate to do so.
Then you get situations like the school board stacked with creationists who believe removing the science textbooks is important for the stable wellbeing of society.
Or organizations like MADD that are hell bent on stamping out alcohol one incremental step at a time because “stable wellbeing of society” is their mandate.
Or the conservative action groups in my area that protest everything they find indecent, including plays and movies, because they believe they’re pushing for the stable wellbeing of society.
There is no such thing as a neutral group pushing for a platonic ideal stable wellbeing of society. If you give a group of people power to control what others see, it will be immediately co-opted by special interests and politics.
Singling out non-profits as virtuous and good is a utopian fallacy. If you give any group power over what others are allowed to show, it will be extremely political and abused by every group with an agenda to push.
It's really not that complicated:
- Ban algorithmic optimization that feeds on and proliferates polarisation.
- To heal society: implement discussion (commenting) features that allow (atomic) structured discussions, to build bridges across cohorts and help find consensus (vs 1000s of comments screaming the same nonsense).
- Force the SM Companies to make their analytics truly transparent and open to the public and researchers for verification.
All of this could be done tomorrow, no new tech required. But it would lose the SM platforms billions of dollars.
Why? Because billions of people posting emotionally, commenting with rage, yelling at each other, and repeating the same superficial arguments/comments/content over and over without ever finding common ground traps far more users in the engagement loop of the SM companies than civilised discussion, finding common ground, and moving on from a topic ever would.
One system of social media would unlock a great consensus-based society for the many; the other delivers endless dystopic screaming battles and riches for a few, while spiralling the world further into a global theatre of cultural and actual (civil) war, thanks to the Zuckerbergs & Thiels.
That only treats the symptoms, not the cause. The purpose of algorithmic optimization farming engagement is to increase ad impressions for money. It is advertising that has to be regulated in such a way that maximizing ad impressions is not profitable or you will find that social media companies will still have every incentive to find other ways to do it that will probably be just as harmful.
> it's really not that complicated...
Then you list at least four priorities, which would require one multi-page bill, or more likely several bills, to make their way through the House, the Senate, and the President's desk, all while under fire from every lobbyist in Washington?
I believe the world may contain legal and regulatory authorities that are not part of the United States. Dozens of them, so I've heard.
It’s always a question of who decides. Apparently, it’s this guy.
I’d favour regulation towards transparency if nothing else. Show what factors influence appearance in a feed.
Recasting regulation as a desire for control is too reductive. The other point of regulation is compromise. No compromise at all is just a wasted opportunity.
My view is that they are just exposing issues with the people in said societies, and now it's harder to ignore them. Much of the hate and the fear and the envy that I see on social networks has other causes, but people are having difficulty addressing those.
With or without social networks, this anger will go somewhere; I don't think regulation alone can fix that. Let's hope it becomes something transformative, not in the world-ending direction but in the constructive one.
They seem to artificially create filter bubbles, echo chambers and rage. They do that just for the money. They divide societies.
For example:
(Trap of Social Media Algorithms: A Systematic Review of Research on Filter Bubbles, Echo Chambers, and Their Impact on Youth)
> First, there is a consistent observation across computational audits and simulation studies that platform curation systems amplify ideologically homogeneous content, reinforcing confirmation bias and limiting incidental exposure to diverse viewpoints [1,4,37]. These structural dynamics provide the “default” informational environment in which youth engagement unfolds. Simulation models highlight how small initial biases are magnified by recommender systems, producing polarization cascades at the network level [2,10,38]. Evidence from YouTube demonstrates how personalization drifts toward sensationalist and radical material [14,41,49]. Such findings underscore that algorithmic bias is not a marginal technical quirk but a structural driver shaping everyday media diets. For youth, this environment is especially influential: platforms such as TikTok, Instagram, and YouTube are central not only for entertainment but also for identity work and civic socialization [17]. The narrowing of exposure may thus have longer-term consequences for political learning and civic participation.
https://www.mdpi.com/2075-4698/15/11/301
> Much of the hate and the fear and the envy that I see on social networks have other reasons
Maybe so, but do you really think actively amplifying or even rewarding them has no effect on people whatsoever?
Throughout history, people did lots of horrible things and/or felt miserable without social networks. Yes, amplifying or rewarding these feelings does not have a positive effect, but I would like to see further analysis of the magnitude.
Think of slavery, or the burning of witches, or genocides: those were considered perfectly normal not that long ago (on a historical scale). I feel that focusing on social networks prevents some people from asking "is that the root cause?". I personally think there are other reasons for this generic "anger" that have a larger impact and that have different solutions than "less AI/fewer social networks", but that would be too off-topic.
Is hate, fear, or envy by themselves wrong, or only wrong when misdirected?
What if social media and the internet at large are now exposing people to things which before had been kept hidden from them, or distorted? Are people wrong to feel hate?
I know the time before the internet, when a select few decided what the public should and shouldn't know, what they should feel, what they should do, and how they should behave. The internet is not the first mass communication medium; neither are social media or LLMs. The public has been manipulated and mind-primed by mass media for over a century now.
The largest bloodshed events, World Wars I and II, were orchestrated by lunatics screaming on the radio or behind a pulpit, with the public eagerly being herded by them to the bloodshed.
This comment isn't in opposition to yours, it's just riffing on what you said.
> Is hate, fear, or envy by themselves wrong, or only wrong when misdirected?
I think they are natural feelings that appear for various reasons. People have struggled for centuries to control their impulses, and for millennia this was used to the advantage of whoever could manipulate them.
The Second World War did not appear in a "happy world"; it might even have started due to the Great Depression. For other conflicts, similarly: I don't think the situation was great before them for most people.
I am afraid that social networks just better expose what happens in people's heads (which would be worrying, as it could predict larger-scale conflicts) rather than making normal people angry (which would be solved by just reducing social media). Things are never black and white, so it's probably something in between. Time will tell whether it's closer to the first or the second.
I agree, but focusing on "the algorithm" makes it seem to the outsider like it must be a complicated thing. Really it just comes down to whether we tolerate platforms that let somebody pay to have a louder voice than anyone else (i.e. ad-supported ones). Without that, the incentive to abuse people's attention goes away.
We've seen what happens when we pretend the market will somehow regulate itself.
Just because the free market isn't producing results you like doesn't mean that more regulation would make it better.
Do LinkedIn as well. I got rid of it earlier this year. The "I am so humbled/blessed to be promoted/reassigned/fired.." posts reached a level of parody that I just couldn't stomach any longer. I felt more free immediately.
N.B. Still employed btw.
You can have a LinkedIn profile without reading the feed.
This is literally how most of the world uses LinkedIn
I never understand why people feel compelled to delete their entire account to avoid reading the feed. Why were you even visiting the site to see the feed if you didn’t want to see the feed?
Yeah, I just use LinkedIn as a public resume and a message system with recruiters. Though even that goes through my email.
LinkedIn bothers me the least, even though it definitely has some of the highest level of cringe content. It's still a good tool to interact with recruiters, look at companies and reach out to their employees. The trick is blocking the feed with a browser extension.
Sorting the feed by "recent" at least gives you a randomized assortment of self aggrandizement, instead of algorithmically enhanced ragebait
Better suggestion: Ignore the feed if you don’t like it.
Don’t visit the site unless you have a reason to, like searching for jobs, recruiting, or looking someone up.
I will never understand these posts that imply that you’re compelled to read the LinkedIn feed unless you delete your account. What’s compelling you people to visit the site and read the feed if you hate it so much? I don’t understand.
Did you just post basically the same reply to two comments 2 minutes apart? :)
I have a special, deep, loathing for linkedin. I honestly can't believe how horrible it is and I don't understand why people engage with it.
I don't understand how people can be so dismissive of LinkedIn purely for its resume function.
For essentially every "knowledge worker" profession with a halfway decent CV, a well kept LinkedIn resume can easily make a difference of $X0,000 in yearly salary, and the initial setup takes one to a few hours. It's one of the best ROI actions many could do for their careers.
Many engineers are dismissive of doing that, and the justifications for it are often full of privilege.
I think this statement is highly dependent on market and geography. I, for one, have mostly received scams. For the occasional real contact, we shifted away from LinkedIn as soon as we could after the basic hello.
You have a special loathing for a site where you can message professional contacts when you need to?
Nobody is forcing you to use the social networking features. Just use it as a way to keep in touch with coworkers.
This. Linkedin is garbage, yet I still use it because there are no competitors. This is what happens in a monoculture.
As someone who doesn't, and never has, had a linkedin. What would a "competitor" look like? There's plenty of job boards. What are you using linkedin for?
Do you really want a “competitor” to linkedin? Do you really want to have to make and manage accounts on multiple sites because you need a job and you don’t know which a company uses?
Isn’t it better to have a single place you check when you need a job because everyone else is also there?
> I deleted my Facebook account a couple of years ago and my Twitter one yesterday.
I never signed up for Facebook or Twitter. My joke is I am waiting until they become good. They are still shitty and toxic from what I can tell from the outside, so I'll wait a little longer ;-)
A social network can be great. Social media — usually not.
Something like Instagram where you have to meet with the other party in person to follow each other and a hard limit on the number of people you follow or follow you (say, 150 each) could be an interesting thing. It would be hard to monetize, but I could see it being a positive force.
Your loss.
Twitter was an incredible place from 2010 to 2017. You could randomly message someone and they would more often than not respond. Eventually an opportunity would come and you'd meet in person. Or maybe you'd form an online community and work towards a common goal. Twitter was the best place on the internet during that time.
Facebook had a golden age as well. It was the place to organize events, parties, and meetups, before Instagram and DMs took over. Nothing beats seeing someone post an album from last night's party and messaging your friends asking them if they remember anything that happened.
I know being cynical is trendy, but you genuinely missed out. Social dynamics have changed. Social media will never be as positive on an individual level as it was back then.
Reddit may be next. The number of "promoted" items is increasing.
I eliminated twitter when a certain rich guy took over.
Actually, I deleted my account there before that, after twitter sent me spam mail trying to lecture me about what I wrote. There was nothing wrong with what I wrote; twitter was wrong. I cannot accept AI-generated spam from twitter, so I went away. I don't really miss it either, but Elon really did worsen the platform significantly with his antics.
> Just because I said to someone 'Brexit was dumb', I don't expect to get fed 1000 accounts talking about it 24/7. It's tedious and unproductive.
Yeah, I can relate to this, but what mostly annoyed me was twitter interfering with "we got a complaint about you - they are right, you are a troublemaker". I don't understand why twitter wants to interfere in communication. Reddit is even worse, since moderators have such a wild range of what is "acceptable" and what is not. Double standards everywhere on reddit.
No, there needs to be control over the algorithms that get used. You ought to be able to tune them. There needs to be a Google-fu equivalent for social media. Or, instead of one platform, one algorithm, let users define the algorithm to a certain degree, using LLMs to help with that, and then allow others to access your algorithms too. Asking Facebook to tweak the algorithm is not going to help, IMO.
IMO there should not be an algorithm. You should just get what you have subscribed to, with whatever filters you have defined. There are better and worse algorithms but I think the meat of the rot is the expectation of an algorithm determining 90% of what you see.
One could absolutely push algorithms that personalize towards what the user wants to see. I think LLMs could be amazing at this. But that's not the maximally profitable algorithm, so nobody does it.
As so many have said, enragement equals engagement equals profit.
All my social media accounts are gone as well. They did nothing for me and no longer serve any purpose.
TBF Bluesky does offer a chronological feed, but the well-intentioned blocklists just became the chief tool for the mean girls of the site.
Could someone use a third-party AI agent to re-curate their feeds? If it was running from the user's computer I think this would avoid any API legal issues, as otherwise ad and script blockers would have been declared illegal long ago.
> but the well-intentioned blocklists just became the chief tool for the mean girls of the site.
I've never used it, but yes this is what I expected. It would be better to have topical lists that users could manually choose to follow or block. This would avoid quite a bit of the "mean girl" selectivity. Though I suppose you'd get some weird search-engine-optimization like behavior from some of the list curators (even worse if anyone could add to the list).
Yes, you absolutely can do this and back in the before times Facebook used to have an API that let you design your own interface to it.
But now I think that will be treated with as much derision by FAANG as ad blockers because you're preventing them from enraging you to keep you engaged and afraid. Why won't you think of the shareholder value (tm)?
But mandating API access would be fantastic government regulation going forward. Don't hold your breath.
> Just because I said to someone 'Brexit was dumb', I don't expect to get fed 1000 accounts talking about it 24/7. It's tedious and unproductive.
I’m not the biggest Twitter user but I didn’t find it that difficult to get what I wanted out of it.
You already discovered the secret: You get more of what you engage with. If you don’t want to hear a lot of Brexit talk, don’t engage with Brexit content. Unfollow people who are talking a lot about Brexit.
If you want to see more of something, engage with it. Click like. Follow those people. Leave a friendly comment.
On the other hand, some people are better off deleting social media if they can’t control their impulses to engage with bait. If you find yourself getting angry at the Brexit content showing up and feeling compelled to add your two cents with a comment or like, then I suppose deleting your account is the only viable option.
> If you don’t want to hear a lot of Brexit talk, don’t engage with Brexit content.
That is really limiting though. I do not want to see Brexit ragebait in my threads, but I am quite happy to engage in intelligent argument about it. The problem is that if, for example, a friend posts something about Brexit I want to comment on, my feed then fills with ragebait.
My solution is to bookmark the friends and groups pages, and the one group I admin and go straight to those. I have never used the app.
I got out of Twitter for a few reasons; part of what made it unpleasant was that it didn't seem to be just what I did that adjusted my feed, but that it was also affected by what the other people I connected to did.
> You get more of what you engage with. If you don’t want to hear a lot of Brexit talk, don’t engage with Brexit content.
The algorithm doesn’t show you “more of the things you engage with”, and acting like it does makes people think what they’re seeing is a reflection of who they are, which is incorrect.
The designers of these algorithms are trying to figure out which “mainstream category” you are. And if you aren’t in one, it’s harder to advertise to you, so they want to sand down your rough edges until you fit into one.
You can spend years posting prolifically about open source software, Blender, and VFX on Instagram, and the algorithm will toss you a couple of things, but it won’t really know what to do with you (aside from maybe selling you some stock video packages).
But you make one three-word comment about Brexit and the algorithm goes “GOTCHA! YOU’RE ANTI-BREXIT! WE KNOW WHAT TO DO WITH THAT!” And now you’re opted into 3 big ad categories and getting force-fed ragebait to keep you engaged, since you’re clearly a huge political junkie. Now your feed is trash forever, unless you engage with content from another mainstream category (like Marvel movies or one of the recent TikTok memes).
> The algorithm doesn’t show you “more of the things you engage with”,
That’s literally what the complaint was that I was responding to.
You even immediately contradict yourself and agree that the algorithm shows you what you engage with
> But you make one three-word comment about Brexit and the algorithm goes “GOTCHA!”
> Now your feed is trash forever, unless you engage with content from another mainstream category
This is exactly what I already said: If you want to see some content, engage with it. If you don’t want to see that content, don’t engage with it.
Personally, I regret engaging with this thread. Between the ALL CAPS YELLING and the self-contradictory posts this is exactly the kind of rage content and ragebait that I make a point to unfollow on social media platforms.
The issue is that it's not symmetric: the algorithm is biased towards rage-baity content, so it will use any tiny bit of engagement with something related to push that content, but there's nothing you can really do to stop it, or to get it to push less rage-baity content. This is also really bad if you realise you have a problem with getting caught up in such content (for some it's borderline addictive): there are no tools for someone to say "I realise I respond to every message I see on this topic, but that's really not good for me, please don't show me it in the first place".
OK sure, if you want to be technically correct, “the algorithm shows you what you engage with” in some sense, but not any useful sense. There’s no proportionality.
As I said above, if you engage heavily with content you like that is outside of the mainstream categories the algorithm has been trained to prefer, it will not show you more of those things.
If you engage one single time, in even the slightest way, with one of those mainstream categories, you will be seeing nothing but that, nonstop, forever.
The “mainstream categories” are not publicly listed anywhere, so it’s not always easy to know that you’ve just stepped in one until it’s too late.
You can’t engage with things you like in proportion to how much you care about them. If something is in a mainstream category and you care about it only a little bit, you have to abstain from interacting with it at all, ever, and not slip up. Having to maintain constant vigilance about this all the time sucks; that’s what pisses me off.
I use X. I have an enormous blocklist and I block keywords. I found that I can also block emoji. This keeps my feed focused on what I want to see (no politics, just technology, classical and jazz music, etc.).
Just started using Minifeed (free account). I am still nostalgic about Google Reader.
>it’s not just X — it’s Y
On the opposite side, you know what they say: "there is no algo. for truth".
That's the same algorithm YouTube has, and it's more blatant. Phone mics and your coworker's proximity do a great job of picking up things you've said, even after disabling mic access plus airplane mode, just by process of elimination.
I'll only use an LLM for projects and building tools, like a junior dev in their 20s.
Your facebook feed is now at this URL: https://www.facebook.com/?filter=all&sk=h_chr
An interesting thing about Twitter, I find, is that plenty of rage bait and narcissism bait surfaces, but amid it there is highly technical information (immunology, genomics, and of course computational work) that is also published there and extremely useful to me.
I've learned pretty well how to 'guide' the algorithm so the tech stuff that's super valuable (to me) doesn't vanish, but I still get nonsense bozo posts in the mix.
I call it AI slop and human slop.
I generally agree with the sentiment, but I can't help feeling we're attributing too much of this change to LLMs. While they're certainly driving the change even further, this trend had already started well before LLMs became as widespread as they are today.
What personally disturbs me the most is the self censorship that was initially brought forward by TikTok and quickly spread to other platforms - all in the name of being as advertiser friendly as possible.
LinkedIn was the first platform where I really observed people losing their unique voice in favor of corporate-friendly, please-hire-me speak. Now this seems to be basically every platform. The only platform that seems somewhat protected from it is Reddit, where many mods seem to dislike LLMs as much as everybody else. But more likely, it's just less noticeable there.
> the self censorship that was initially brought forward by TikTok
I think that’s even too soon! YouTube has had rules around being advertising friendly for longer than TikTok has existed. And the FCC has fined swearing on public broadcasts for like 50+ years.
But I do agree, we’re attributing too much to LLMs. We don’t see personal, human-oriented content online because social media is just not about community.
1. Young people (correctly) realized they could make lots of money being influencers on social media. TikTok does make that easier than ever. I have close friends who make low 6 figures streaming on TikTok (so obviously they quit the low wage jobs they were doing before).
2. People have been slowly waking up to the fact that social media has always been pretty fake. I quit 6 years ago, and most of my friends have slowly reduced how much they use it. All of the platforms are legally incentivized to only care about profit and engagement. Capitalism doesn’t allow a company to care about community and personal voice, if algorithmic feeds of influencers will make them more money.
There’s still good content out there if you know where to look. But digital human connection happens in group chats, DMs, and FaceTime, not on public social media.
The Internet will become truly dead with the rise of LLMs. The whole hacking culture of the 90s and 00s will always be the golden age. RIP
Maybe. Nature abhors a vacuum. I personally suspect that something new will emerge. For better or worse, some humans work best when weird restrictions are imposed. That said, yes, the wild 90s net is dead. It probably has been for a while, but we're all mourning.
Not quite dead yet. For me the rise of LLMs and BigTech has helped me turn more away from it. The more I find Ads or AI injected into my life, the more accounts I close, or sites I ignore. I've now removed most of my BigTech 'fixes', and find myself with time to explore the fun side of hacking again.
I dug out my old PinePhone and decided to write a toy OS for it. The project has just the right level of challenge and reward for me, and feels more like early days hacking/programming where we relied more on documentation and experimentation than regurgitated LLM slop.
Nothing beats that special feeling when a hack suddenly works. Today it was just a proximity sensor reading displayed, but it involved a lot of SoC hacking to get that far.
I know there are others hacking hard in obscure corners of tech, and I love this site for promoting them.
I hacked in the 90s and 00s, wasn’t that great/golden if you took your profession seriously…
There are still small pockets with actual humans to be found. The small web exists. Some forums keep going; I'm still shitposting on Something Awful after twenty years and it's still quite active. Bluesky has its faults, but it also has, for example, an active community of scholars you can follow and interact with.
100%. I miss trackers and napster. I miss newgrounds. This mobile AI bullshit is not the same. I don't know why, but I hate AI. I consider myself just as good as the best at using it. I can make it do my programming. It does a great job. It's just not enjoyable anymore.
I've been thinking about this as well, especially in the context of historical precedents in terms of civilization/globalization/industrialization.
How LLMs standardize communication is the same way there was a standardization in empires expanding (cultural), book printing (language), the industrial revolution (power loom, factories, assembly procedures, etc).
In that process, interesting but less "scale-able" (or simply not used by the people in power) culture, dialects, languages, craftsmanship, and ideas were often lost, replaced by easier-to-produce but often lower-quality products, through the power of "affordable economics" rather than active conflict.
We already have the English "business-concise, buzzword-heavy" formal register trained into ChatGPT (or, for informal writing, the casual overexcited American), which I'm afraid might take hold of global communication the same way as advanced LLM usage spreads.
>How LLMs standardize communication is the same way there was a standardization in empires expanding (cultural), book printing (language), the industrial revolution (power loom, factories, assembly procedures, etc).
Explain to me how "book printing" of the past "standardized communication" in the same way as LLMs are criticized for homogenizing language.
I'm taking "same way" to be read as "authoritative", whether de facto or de jure. Basically by dint of people using what's provided instead of coming up with their own.
Everyone has the same few dictionary spellings (that are now programmed into our computers). Even worse (from a heterogeneity perspective), everyone also has the same few grammar books.
As examples: How often do you see American English users write "colour", or British English users write "color", much less colur or collor or somesuch?
Shakespeare famously spelled his own last name half a dozen or so different ways. My own patriline had an unusual variant spelling of the last name that standardized to one of the more common variants in the 1800s.
https://en.wikipedia.org/wiki/History_of_English_grammars
"Bullokar's grammar was faithfully modelled on William Lily's Latin grammar, Rudimenta Grammatices (1534).[9] Lily's grammar was being used in schools in England at the time, having been "prescribed" for them in 1542 by Henry VIII.[5]"
It goes on to mention a variety of grammars that may have started out somewhat descriptive, but became more prescriptive over time.
Hits close to home after I've caught myself tweaking AI drafts just to make them "sound like me". That uniformity in feeds is real and it's like scrolling through a corporate newsletter disguised as personal takes.
What if we flip LLMs into voice trainers? Like, use them to brainstorm raw ideas and rewrite everything by hand to sharpen that personal blade. Is the atrophy risk still huge?
Nudge to post more of my own mess this week...
It's still an editor I can turn to in a pinch when my favorite humans aren't around. It makes better analogies sometimes. I like going back and forth with it, and if it doesn't sound like me, I rewrite it.
Don't look at social media. Blogging is kinda re-surging. I just found out Dave Barry has a substack. https://davebarry.substack.com/ That made me happy :) (Side note, did he play "Squirrel with a Gun??!!!")
The death of voice is greatly exaggerated. Most LLM voice is cringe. But it's ok to use an LLM, have taste, and get a better version of your voice out. It's totally doable.
It's ironic that https://substack.com/@davebarry uses a lot of AI-generated imagery. Maybe the death of vision is not exaggerated.
I don't judge; I'm not an artist, so if I wanted to express myself in images I'd need AI help, and I can see how people would do the same with words.
[Sometime in the near future] The world's starved for authenticity. The last original tweet crowned a God... then killed the kid chasing that same high. Trillionaires run continent-wide data centers, endlessly spinning up agents that hire cheap physical labor to scavenge the world for any spark of novelty. The major faith is an LLM cult forecasting the turning of the last stone. The rest of us choke on recycled ad slop.
Where are these places where everything is written by a LLM? I guess just don’t go there. Most of the comments on HN still seem human.
I think the front page of HN has had at least one LLM-generated blog post or large GitHub README on it almost every day for several months now.
TBH I prefer to read/skim the comments first and only occasionally read the original article if the comments make me curious enough. So far I've never ended up checking something that seemed AI-generated.
It’s pretty much all you see nowadays on LinkedIn. Instagram is infected by AI videos that Sora generates while X has extremist views pushed up on a pedestal.
The HN moderation system seems to hold, at least mostly. But I have seen high-ranking HN submissions with all the subtler signs of LLM authorship that have managed to get lots of engagement. Granted, it's mostly people pointing out the subtle technical flaws or criticizing the meandering writing style, but that works to get the clicks and attention.
Frankly, it only takes someone a few times to "fall" for an LLM article -- that is, to spend time engaging with an author in good faith and try to help improve their understanding, only to then find out that they shat out a piece of engagement bait for a technology they can barely spell -- to sour the whole experience of using a site. If it's bad on HN, I can only imagine how much worse things must be on Facebook. LLMs might just simply kill social media of any kind.
> I guess just don’t go there.
How do you know? A lot of the stuff I see online could very much be produced by LLMs without me ever knowing. And given the economics I suspect that some of it already is.
Ironically this post is written in a pretty bland, 'blogging 101' style that isn't enjoyable to read and serves just to preach a simple, consensus idea to the choir.
These kinds of posts regularly hit the top 10 on HN, and every time I see one I wonder: "Ok, will this one be just another staid reiteration of an obvious point?"
True, but one of the least-explored problems with AI is that because it can regurgitate basic writing, basic art, basic music with ease, there is this question:
Why do it at all if I won't do better than the AI?
The worst risk with AI is not that it replaces working artists, but that it dulls human creativity by killing the urge to start.
I am not sure who said it first, but every photographer has ten thousand bad photos in them and it's easier if they take them at the beginning. For photographers, the "bad" is not the technical inadequacy of those photos; you can get past that in the first one hundred. The "bad" is the generic, uninteresting, uninspiring, underexplored, duplicative nature of them. But you have to work through that to understand what "good" is. You can't easily skip these ten thousand photos, even if your analysis and critique skills are strong.
There's a lot to be lost if people either don't even start or get discouraged.
But for writing, most of the early stuff is going to read much like this sort of blog post (simply because most bloggers are stuck in the blogging equivalent of the ten thousand photos; the most popular bloggers are not those elevating writing).
"But it looks like AI" is the worst, most reflexive thing about this, because it always will, since AI is constantly stealing new things. You cannot get ahead of the tireless thief.
The damage generative AI will do to our humanity has only just started. People who carry on building these tools knowing what they are doing to our culture are beneath our contempt. Rampantly overcompensated, though, so they'll be fine.
I continually resist the urge to deploy my various personas onto hn, because I want to maintain my original hn persona. I am not convinced other people do the same. It is not that difficult to write in a way that avoids some tell tale signs.
Many Instagram and Facebook posts are now LLM-generated to farm engagement. The verbosity and breathless excitement tend to give it away.
There was recently this link talking about AI slop articles on Medium:
https://rmoff.net/2025/11/25/ai-smells-on-medium/
He doesn't link many examples, but at the end he gives the example of an author pumping out 8+ articles in a week across a variety of topics. https://medium.com/@ArkProtocol1
I don't spend time on medium so I don't personally know.
There are already many AI-generated submissions on HN every day. Comments maybe less so, but I've already seen some, and the amount is only going to increase with time.
Every time I see AI videos in my YouTube recommendations I say “don’t recommend this channel” but the algorithm doesn’t seem to get the hint. Why don’t they make a preference option of “don’t show me AI content”
You assume that detecting AI content is trivial. It isn't.
Because they have a financial incentive not to.
I've seen AI generated comments on HN recently, though not many. Users who post them usually only revert back to human when challenged (to reply angrily), which hilariously makes the change in style very obvious.
Of course, there might be hundreds of AI comments that pass my scrutiny because they are convincing enough.
LinkedIn
I see them regularly on several subreddits I frequent.
Not sure if it's an endemic problem just yet, but I expect it to be, soon.
For myself, I have been writing all my life. I tend to write longform posts from time to time[0], and enjoy it.
That said, I have found LLMs (ChatGPT works best for me) to be excellent editors. They can help correct minor mistakes, as long as I ignore a lot of their advice.
[0] https://littlegreenviper.com/miscellany/
I just want to chime in and say I enjoy reading your takes across HN, it's also inspiring how informative and insightful they are. Glazing over, please never stop writing.
Thanks so much!
The problem with the “your voice is unique and an asset” argument is what we’ve promoted for so long in the software industry.
Worse is better.
A unique, even significantly superior, voice will find it hard to compete against the pure volume of terrible non unique LLM generated voices.
Worse is better.
It’s OK. Most of our opinions suck and are unoriginal anyway.
The few who have something important to say will say it, and we will listen regardless of the medium.
Humans evolved to spend fewer calories and avoid cognitively demanding tasks.
People will spend time on things that serve utility AND are calorically cheap. Doomscrolling is a more popular pastime than, say, completing Coursera courses.
The liberal idea that the best ideas win out in the marketplace turned out to be laughably wrong.
The marketplace is a terrible mechanism for truth-finding except for all the others. What's your proposed alternative that doesn't just relocate the problem to whoever gets to be the arbiter?
I'd argue that they do win out, it's just not the ideas that we thought were best.
"Best idea", but it's "best" by memetic reproduction score, not by "how well does this solve a real problem?"
Same thing with evolution: "survival of the fittest" doesn't mean "survival of the muscle", just whatever's best at passing on DNA.
I wouldn’t say it’s a liberal idea. It was a foundational argument in jurisprudence, from Holmes’s dissent in the Abrams case.
Let's clarify: maybe the best ideas would win out in a "level marketplace", where the consumer is actually well informed about the products, the products' true costs have to be priced in, and there are no ad agencies.
Instead, we have misinformation (PR), lobbying, bad regulation written by big companies to entrench their products, and corruption.
So maybe, like communism, in a perfect environment the market would produce what's best for consumers/the population, but as always there are power-seeking minority subgroups with no moral barriers to manipulating the environment to push their product/company.
They get drowned out by bots and misinformation and rage bait and "easiness".
Economy is shit? Let's throw out the immigrants because they are the problem, and let's use the most basic idea of taxing everything to death.
No one wants to hear hard truths, and no one wants to accept that even as adults, they might just not be smart. Even as an adult, your education should still matter (and I do not mean having one degree = expert).
First we need to think about why we consume content. I am happy to read LLM-created stuff when I need to know something and it delivers 100%. Other reasons, like "get perspectives of real humans" or "resonate"... not so much.
If you give an LLM enough context, it writes in your voice. But it requires using an intelligent model, and very thoughtful context development. Most people don't do this because it requires effort, and one could argue maybe even more effort than just writing the damn thing yourself. It's like trying to teach a human, or anyone, how to talk like you: very hard because it requires at worst your entire life story.
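For what it's worth, the "thoughtful context development" boils down to grounding the model in a corpus of your own writing before asking for anything. A rough sketch in Python (the prompt wording and file names are purely illustrative):

    def build_voice_prompt(samples, task):
        """Ground the model in the author's own writing before asking for a draft."""
        corpus = "\n\n---\n\n".join(samples)
        return (
            "Below are writing samples by one author. Study the diction, "
            "sentence rhythm, and typical digressions.\n\n"
            + corpus +
            "\n\nNow, writing strictly in that author's voice: " + task
        )

    # Hypothetical usage: a few of your own posts, plus the actual ask.
    samples = [open(p).read() for p in ("post1.txt", "post2.txt")]
    print(build_voice_prompt(samples, "draft a 200-word reply about feed ranking."))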
Something that freaked me out a little bit is that I've now written enough online (i.e.: HN comments) that the top models know my voice already and can imitate it on request without having to be fed any additional context.
There's a data centre somewhere in the US running additions and multiplications through a block of numbers that has captured my voice.
Why the f- would I train software to do my thinking and reasoning for me!?
That is not what training is, but with an edgy attitude like yours, no one will want to give you their arguments.
* it writes in an imitation of your voice.
Why does this even matter? If it can say what I wanted to say more eloquently and less stiltedly, adding some interesting nuance along the way, while still sounding close to me, why not? Meanwhile, I can learn a rhetorical trick or two from reading the result.
In one of the WhatsApp communities I belong to, I noticed that some people use ChatGPT to express their thoughts (probably asking it to make their messages more eloquent or polite or whatever).
Others respond in the same style. As a result, it ends up with long, multi-paragraph messages full of em dashes.
Basically, they are using AI as a proxy to communicate with each other, trying to sound more intelligent to the rest of the group.
A friend of mine, who speaks English as a second language, does this because his tone was always being misconstrued. I'd bug him about his slop, but he'll take that over getting his tone misconstrued. I get it.
LOL ... in WhatsApp! ... sorry, we're fucked ...
Sometime within the next few years I imagine there will be a term along the lines of "re-humanise," where folks detox from AI use to get back in touch with humanity. At the rate we're going, humanity has become a luxury and will soon demand a premium.
<< Write in your voice.
I don't disagree, but LLMs happened to help with standardizing some interesting concepts that were previously more spread out as concepts ( drift, scaffolding, and so on ). It helps that chatgpt has access to such a wide audience to allow that level of language penetration. I am not saying don't have voice. I am saying: take what works.
> I don't disagree, but LLMs happened to help with standardizing some interesting concepts that were previously more spread out as concepts ( drift, scaffolding, and so on ).
What do you mean? The concepts of "drift" and "scaffolding" were uncommon before LLMs?
Not trying to challenge you. Honestly trying to understand what you mean. I don't think I have heard this ever before. I'd expect concepts like "drift" and "scaffolding" to be already very popular before LLMs existed. And how did you pick those two concepts of aaallll... the concepts in this world?
Apologies; upon re-reading, it seems I did not phrase that as clearly as I originally intended. You are right in the sense that the concepts existed beforehand and the words were there to capture them. What did not exist, however, was the sudden resurgence of those words due to them appearing in LLM output more often than not. This is what I mean by a level of language penetration (people using words and concepts because LLMs largely introduced them to those concepts, kinda like genetics or pop psych: before situational comedy, projection was not a well-known concept).
Does it make more sense?
Also, these models are being used to promote fake news, create controversy, or interact with real humans for unknown purposes.
Talking to some friends, they feel the same. Depending on where you are participating in a discussion, it just might not feel worth it, because the other side might just be a bot.
In a lot of ways, I'm thankful that LLMs are letting us hear the thoughts of people who usually wouldn't share them.
There are skilled writers. Very skilled, unique writers. And I'm both exceedingly impressed by them as well as keenly aware that they are a rare breed.
But there's so many people with interesting ideas locked in their heads that aren't skilled writers. I have a deep suspicion that many great ideas have gone unshared because the thinker couldn't quite figure out how to express it.
In that way, perhaps we now have a monotexture of writing, but also perhaps more interesting ideas being shared.
Of course, I love a good, unique voice. It's a pleasure to parse patio11's straussian technocratic musings. Or pg's as-simple-as-possible form.
And I hope we don't lose those. But somehow I suspect we may see more of them as creative thinkers find new ways to express themselves. I hope!
> In a lot of ways, I'm thankful that LLMs are letting us hear the thoughts of people who usually wouldn't share them.
I could agree with you in theory, but do you see the technology used that way? Because I definitely don't. The thought process behind the vast majority of LLM-generated content is "how do I get more clicks with less effort", not "here's a unique, personal perspective of mine, let's use a chatbot to express it more eloquently".
We might get twice as many original ideas but hundred times as much filler. Neither of those aspects erases the other. Both the absolute number of ideas and the ratio matter.
> In that way, perhaps we now have a monotexture of writing, but also perhaps more interesting ideas being shared.
They aren't your ideas if they're coming out of an LLM.
Are they your ideas if they go through a heavy-handed editor? If you've had lots of conversations with others to refine them?
I dunno. There's ways to use LLMs that produces writing that is substantially not-your-ideas. But there's also definitely ways to use it to express things that the model would not have otherwise outputted without your unique input.
counterargument: they still are your ideas even if they went through LLM.
Unsubstantiated
wrong
I seriously doubt people didn't write blog posts or articles before LLMs because they didn't know how to write.
It's not some magic roadblock. They just didn't want to spend the effort to get better at writing; you get better at writing by writing (like good old Steve says in "On Writing"). It's how we all learnt.
I'm also not sure everyone should be writing articles and blog posts just because. More is not better. Maybe if you feel unmotivated about making the effort, just don't do it?
Almost everyone will cut novice writers and non-native $LANGUAGE speakers some slack. Making mistakes is not a sin.
Finally, my own bias: if you cannot be bothered to write something, I cannot be bothered to read it. This applies to AI slop 100%.
I hate when people hijack progressive language - like in your case the language of accessibility - for cheap marketing and hype.
Writing is one of the most accessible forms of expression. We were living in a world where even publishing was as easy as imaginable - sure, not actually selling/profiting, but here’s a secret, even most bestselling authors have either at least one other job, or intense support from their close social circle.
What you do to write good is you start by writing bad. And you do it for ages. LLMs not only don’t help here, they ruin it. And they don’t help people write because they’re still not writing. It just derails people who might, otherwise, maybe start actually writing.
Framing your expensive toy that ruins everything as an accessibility device is absurd.
I'm anon, but also the farthest thing from a progressive, so I find this post amusing.
I don't disagree with a lot of what you're saying but I also have a different frame.
Even if we take your claim that LLMs don't make people better writers as true (which I think there's plenty to argue with), that's not the point at all.
What I'm saying is people are communicating better. For most ideas, writing is just a transport vessel for ideas. And people now have tools to communicate better than they otherwise would have.
Most people aren't trying to become good writers. That's true before, and true now.
On the other hand, this argument probably isn't worth having. If your frame is that LLMs are expensive toys that ruin everything -- well, that's quite an aggressive posture to start with and is both unlikely to bear a useful conversation or a particularly delightful future for you.
> What I'm saying is people are communicating better. For most ideas, writing is just a transport vessel for ideas. And people now have tools to communicate better than they otherwise would have.
You would have to define 'better'.
> I'm anon, but also the farthest thing from a progressive, so I find this post amusing.
Oh I know. I called it hijacking because the result is as progressive as a national socialist is a socialist.
> What I'm saying is people are communicating better.
Actually they’re no longer communicating at all.
It's probably true that it reduces the barrier to entry; you don't refute that point in your post. You just call it cheap marketing and hype.
Barriers to entry can be a good thing. It’s a filter for low effort content.
It doesn’t. You’re not entering anything with an LLM.
It basically boils down to "I want the external validation of being seen as a good writer, without any of the internal growth and struggle needed to get there."
> "struggle needed to get there."
"Struggle" argument is from gatekeepers and for masochists. Thank you very much.
I mean, kinda, but also: not only are someone’s meandering ramblings a part of a process that leads to less meandering ramblings, they’re also infinitely more interesting than LLM slop.
In my view LLMs are simply a different method of communication. Instead of relying on "your voice" to engage the reader and persuade them of your point of view, writing with LLMs is about creating an idea space that a reader can interact with, explore from their own perspective, and develop their own understanding of, which is much more powerful.
The global alignment also happens through media like tv shows and movies, the internet overall.
I agree I think we should try to do both.
In Germany, for example, we have very few typically German brands; our brands became very global. If you go to Japan, you will find the same product, like ramen or cookies or cakes, a lot, but all of them slightly different, from different small producers.
If you go to a motorway/highway rest area in Japan, you will find local products. If you do the same on a German autobahn, you find just the generic American shit: Mars, Mondelez, PepsiCo, Unilever...
Even our German cola, Fritz-kola, is a niche/hipster thing even today.
The devil's advocate in me would say that this post was authored by one of the LLM model creators, realizing they really need more fresh meat to train on.
I'm the OP. I can attest that I am not an LLM model creator! :)
I consider myself an LLM pragmatist. I use them where they are useful, and I educate people on them and try to push back on all the hype marketing disguised as futurism from LLM creators.
It's more that people who historically didn't have a voice now have one. It's often stupid, but sometimes also interesting and innovative. I saw a channel where a university professor "I" comes to the realization she's been left-leaning/biased for decades, that her recent male students no longer dare engage in debate because of shaming/gaslighting, etc. Then I click the channel description and it turns out it's "100% original writing". If it hadn't said that, it would be strawman propaganda. But now it does... Not sure how to put a finger on it; there's some nervous excitement when reading these days, not knowing who the sender is, getting these 'reveal' moments when finding out the whole thing was made up by some high school kid with AI, or by an insane person.
The post sounds beige and, ironically, AI-generated.
In any case, as someone who has experimented with AI for creative writing, LLMs _do not destroy_ your voice. They do flatten it, but with minimal effort you can make the output sound the way you find best reflects your thought.
Soon, we'll be nostalgic for social media. The irony.
FWIW this prompt works very well for me:
Your mileage may vary.
There are deterministic solutions for grammar and spellcheck. I wouldn't rely on LLMs for this. Not only is it wasteful, we're turning to LLMs for every single problem, which is quite sad.
It is not a zero sum game.
I have always had a very idiosyncratic way of expressing myself, one that many people do not understand. Just as having a smartphone has changed my relationship to appointments - turning me into a prompt and reliable "cyborg" - LLMs have made it possible for me to communicate with a broader cross section of people.
I write what I have to say, I ask LLMs for editing and suggestions for improvement, and then I send that. So here is the challenge for you: did I follow that process this time?
I promise to tell the truth.
I think there's a difference between using an LLM as an editor and asking the LLM to write something for you. The output in the former I find to still have a far clearer tonal fingerprint than the latter.
And who's to say your idiosyncratic expression wouldn't find an audience as it changes over time? Just you saying that makes me curious to read something you wrote.
Not the GP, but I'm a millennial who leans on cultural references and has a bit of verbal flourish that I think comes from a diet of ironic, quirky, dialogue-heavy media in the early 2000s, stuff like Firefly, Veronica Mars, and Pushing Daisies, not to mention 90s Tarantino, John Cusack films, and so on.
I've never given it too much thought, it's just... the way I communicate, and most people in my life don't give much thought to it either. But I recently switched jobs, and a few people there remarked on it, and I've also recently been corresponding with someone overseas who is an intermediate-level English speaker and says I sometimes hurt their brain.
Not making a value judgment either way on whether it's "sophisticated" or whatever, but it is, I think, part of my personality, and if I used LLM editing/translation I would want it to be only in the short term, and certainly not as something that replaces my own way of speaking.
I would be interested to see an example of a before and after on this. I do think LLMs as editors and rewriters can be useful sometimes, but I usually only ever see them used as a means to puff out an idea into longer prose which is really mostly counterproductive.
I think it can be useful as a tone-check sometimes, like show me how a frustrated or adversarial reader is going to interpret this thing I'm about to send/post.
Transformation seems reasonable for that purpose. And if we were friends, I'd rather read your idiosyncratic raw output.
At some point, generation breaks the social contract that I'm using my energy and attention to consume something that another human spent their energy and attention creating.
In that case I'd rather read the prompt the human brain wrote, or if I have to consume it, have an LLM consolidate it for me.
I should probably do that too. I once wrote an email that to me was just filled with impersonal information. The receiver was somebody I did not personally know. I later learned I made that person cry. Which I obviously did not intend. I did not swear or call anyone names. I basically described what I believe they did, what is wrong about that and what they should do instead.
If someone cries about an email you sent, the problem isn’t with you.
Here's my guess- your post reflects your honest opinion on the matter, with some LLM help. It elaborated on your smartphone analogy, and may have tightened up the text overall.
LLMs have now robbed you of the opportunity to make your communication clearer
I don't see signs of LLM writing in your comment so I'll have to guess no.
Please share what you told the LLM! I can't be the only curious one.
If you didn't intentionally try and trick us, then yes, you used an LLM.
You didn't, but you've learned.
I wholeheartedly agree, I wrote about this at https://ruudvanasseldonk.com/2025/llm-interactions.
Your post reminded me how I could tell my online friend was pissed just because she typed "okay." or "K." instead of "okay". We could sense each other's emotional state from texting; it was one of those friendships you form over text through the internet. I wouldn't recommend forming these too deeply, since some in-person nuance is lost; we could never transition to real-life friends despite living close by. But we could tell what mood we were in just from typing. It was wild.
There has been an explosion in verbose status update emails at my job recently which have all clearly been written by ChatGPT. It’s the fucking emojis though that drive me wild. It’s so hard to read the actual content when there’s an emoji for every single sentence.
And now when I see these emoji fests I instantly lose interest and trust in the content of the email. I have to spend time sifting through the fluff to find what’s actually important.
LLMs are creating an asymmetric imbalance between the effort to write and the effort to read. What takes my coworkers probably a couple of minutes to draft requires me 2-3x as long to decipher. That imbalance used to be the opposite.
I’ve raised the issue before at work, and one response I got was to “use AI to summarize the email.” Are we really spending all this money and energy on the world’s worst compression algorithm?
You're absolutely right.
Here's why:
1) People who use LLMs for generating
2) People who use LLMs for understanding
I think I'll stick to 2) for many reasons.
> Social media has become a reminder of something precious we are losing in the age of LLMs: unique voices.
Social media already lost that nearly two decades ago - it died as content marketing rose to life.
Don't blame on LLMs what we've long lost due to cancer that is advertising[0].
And don't confuse GenAI as a technology with what the cancer of advertising coopts it to. The root of the problem isn't in the generative models, it's in what they're used for - and the problem uses aren't anything new. We've been drowning in slop for decades, it's just that GenAI is now cheaper than cheap labor in content farms.
--
[0] - https://jacek.zlydach.pl/blog/2019-07-31-ads-as-cancer.html
> The root of the problem isn't in the generative models, it's in what they're used for
That's like giving weapons to everybody in the world for free, and asking to be blamed for the increased deaths and violence.
No, that's like pretending the weapons weren't already available. Everyone had assault rifles for two decades, giving access to smart rifles isn't really changing anything about the nature of the problem.
it's not the voice. it's the lack of need to talk tough about the hard problems. if you accept what is and just babble, anything you write will sound like babbling.
there's enough potential and wiggle room but people align, even when they don't, just to align.
when Rome was flourishing, only a few saw what was lingering in the cracks but when in flourishing Rome ...
The LLM v human debate here reminds me of the now dormant "Are you living in a simulation?" discussions of previous decades.
Don’t write anything with LLMs, ever. Unless having no credibility is your goal.
I call it the enshittification fixpoint. Not only are we losing our voice; we'll soon enough start thinking and talking like LLMs. After a generation of kids grows up reading and talking to LLMs, that will be the only way they know how to communicate. You'll talk to a person and you won't be able to tell the difference between them and an LLM, not because LLMs became amazing, but because our writing and thinking styles became more LLM-like.
- "Hey, Jimmy, the cookie jar is empty. Did you eat the cookies?"
- "You're absolutely right, father — the jar seems to be empty. Here is bullet point list why consuming the cookies was the right thing to do..."
Well, voice is ultimately coupled to a person; LLMs thus fake being a person. There are, however, use cases for LLMs too. I saw them used in the creation of video games, and in content generated by hobbyists. So while I think AI should actually die, for hobbyists generating mods for old games, AI voice-overs may not be that bad. Just as AI-generated images for free-to-play browser games may not be solely bad either.
Of course there are also horrible uses of AI: liars, scummy cheaters, and fake videos on YouTube, owned by a greedy mega-corporation that sold its soul to AI. So the bad use cases may outnumber the good ones, but good use cases exist, and "losing our voice to LLMs" isn't the whole picture, sorry.
Ironically I find it hard to tell whether this writing is LLM or merely a bit hollow and vapid.
I don't find that current-generation LLMs output such short sentences, all starting with the same prefix like "Your voice".
You are downvoted but I actually agree with you. This blog post could have been a LinkedIn post from any "influencer", considering how generic it is.
Deus Ex-Machina is starting to take off ...
Subsume your agency. Stop writing. Stop learning. Stop thinking for yourself. Become hylic. Just let the machine think everything for you and act as it acts. Those that own them are benevolent and there will never be consequences.
You never had a voice to lose
I'm in complete agreement with the idea that people should express themselves in their own words. But this collides with certain facts about U.S. adults (and students). This summary (https://www.nu.edu/blog/49-adult-literacy-statistics-and-fac...) reveals that:
* 28% of U.S. adults are at or below "level 1" literacy, essentially meaning people unable to function in an environment that requires written language skills.
* 54% of U.S. adults read below a sixth-grade level.
These statistics refer to an inability to interpret written material, much less create it. As to the latter, a much smaller percentage of U.S. adults can compose a coherent sentence.
We're moving toward a world where people will default to reliance on LLMs to generate coherent writing, including college students, who according to recent reports are sometimes encouraged to rely on LLMs to complete their assignments.
If we care to, we can distinguish LLM output from that of a typical student: An LLM won't make the embarrassing grammatical and spelling errors that pepper modern students' prose.
Yesterday I saw this headline in a major online media outlet: "LLMs now exceed the intelect [sic] of the average human." You don't say.
I'm just using the internet less and less recreationally. Except for pirating movies.
I think that's for the best. It was human-made slop, now it's automated slop. Can't wait for people to stop paying it attention so that it withers. "It" being the whole attention economy scam.
I think that this is imbalanced in favour of wannabe influencers, who want to be consistent and popular.
If you really have no metrics to hit (not even the internal craving for likes), then it doesn't make much sense to outsource writing to LLMs.
But yes, it's sad to see that your original stuff is lost in the sea of slop.
Sadly, as long as there will be money in publishing, this will keep happening.
100% agree.
Social media is a reminder that we were losing our voice to mass media consumption long before LLMs were a thing.
Even before LLMs, if you wanted to be a big content creator on YouTube, Instagram, or TikTok, you'd better fall in line and produce content with the target aesthetic. Otherwise, good luck.
Even before LLMs, the entire SEO industry was writing content optimized to a tee with templates.
We're losing our code too.
Skill becomes expensive mechanized commodity
old code is left to rot while people try to survive
we lose our history, we lose our dignity.
We lose our voice based on how we use our voice.
We improve our use of words when we work to improve our use of words.
We improve how we understand by how we ask.
I’ve realized that if you say that pro-AI commenters are actually bot accounts, there’s not really much that can be done to prove otherwise.
The discomfort and annoyance that sentence generates is interesting. Being accused of being a bot is frustrating, while interacting with bots creates a sense of futility.
Back in the day when Facebook first was launched, I remember how I felt about it - the depth of my opposition. I probably have some ancient comments on HN to that effect.
Recently, I’ve developed the same degree of dislike for GenAI and LLMs.
Process before product, unless the product promises to deliver a 1000% return on your investment. Only the disciplined artist can escape that grim formula.
Let's not forget to mention the rise of AI-generated video. You can't really trust any video as real anymore.
It's a little odd for a capitalist society that values outputs so highly to also value process as much.
We've proved we can sort of value it, through supporting sustainability/environmental practices, or at least _pretending to_.
I just wonder, what will be the "Carbon credits" of the AI era. In my mind a dystopian scheme of AI-driven companies buying "Human credits" from companies that pay humans to do things.
police ignores me for 2 years and counting
Some people, but not everyone, are abdicating their agency. Period.
And that too is an expression of their own agency. #Laissez-faire
For those of us not constantly online, we're doing just fine.
I suppose when your existence is in the cloud, the fall back to earth can look scary. But it's really only a few inches down. You'll be ok.
We have a channel at work where we share our experiences in using AI for software engineering.
Predictably, this has turned into a horror zone of AI written slop that all sounds the same, with section titles with “clever” checkbox icons, and giant paragraphs that I will never read.
"Over time, it has become obvious just how many posts are being generated by an LLM. The tell is the voice. Every post sounds like it was posted by the same social media manager."
I'd love to see an actual study of people who think they're proficient at detecting this stuff. I suspect that they're far less capable of spotting these things than they convince themselves they are.
Everything is AI. LLMs. Bots. NPCs. Over the past few months I've seen demonstrably real videos posted to sites like Reddit, and the top post is someone declaring that it is obviously AI, they can't believe how stupid everyone is to fall for it, etc. It's like people default assume the worst lest they be caught out as suckers.
whatever bro