ConceitedCode 2 days ago [-]
I suspect we'll address this by just going back to older ranking algorithms for search. We'll go back to the primary signal of good content being links from trusted sources.
People gaming the content based algorithms will eventually cause their own downfall.
iuvcaw 2 days ago [-]
Ironically this post is doing wonders for its page rank, as people are linking to it in the comments
Now that we have better ML, maybe we could take "link sentiment" into account too.
oliveroot 2 days ago [-]
I think they have something better - “link rank” which essentially takes into account the quality of backlink.
I believe it is nuanced enough to have different rank per “topic”, or “keyword” etc. but admittedly just kinda guessing from the outside.
The last time I tried to build something like this I realized it’s useless without first having a gigantic amount of data already crawled. When I started crawling I realized I would never catch Google. I think without Wikipedia the LLMs might have taken 10 more years to surpass them.
cyanydeez 2 days ago [-]
Crawlers would need to use backlinks but also rank vector similarity, to ensure the linked content matches the linking intent. Some kind of graded measure of how relevant the link is to the linked page, and vice versa.
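A toy illustration of the idea (purely a sketch, not any search engine's real algorithm): compare the words surrounding a link with the words on the target page via cosine similarity over bag-of-words vectors. A real system would use learned embeddings instead.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def link_relevance(anchor_context: str, target_text: str) -> float:
    """Score how well a link's anchor context matches the target page."""
    return cosine(Counter(anchor_context.lower().split()),
                  Counter(target_text.lower().split()))
```

A link whose anchor context has nothing in common with the target page would score near zero, flagging the backlink as off-topic.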
zahlman 2 days ago [-]
I don't know how good it was, but sentiment analysis was definitely a thing pre-ChatGPT.
Retr0id 2 days ago [-]
It was pretty basic though, and even a frontier LLM might struggle to infer that OP is a negative-sentiment link, without sufficient context.
Aurornis 2 days ago [-]
On most sites with user-submitted content, including Hacker News, rel=nofollow is used to signal that links should not be counted by search crawlers in authority calculations.
You basically have to use nofollow for comments otherwise your site becomes a big target for SEO link spam.
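As a minimal sketch (assuming a Python backend; the function name is made up, but `nofollow` and `ugc` are the standard link attributes Google documents), a comment renderer can tag user-submitted links so crawlers skip them:

```python
from html import escape

def render_user_link(url: str, text: str) -> str:
    # rel="nofollow ugc" tells crawlers this link is user-generated
    # content and should not pass ranking authority to its target.
    return (f'<a href="{escape(url, quote=True)}" rel="nofollow ugc">'
            f'{escape(text)}</a>')
```

Both the URL and the link text are HTML-escaped as well, since user-submitted markup is the other classic comment-section attack surface.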
politelemon 2 days ago [-]
I wonder if we ought to be flagging it then? There are already so many uninteresting AI-slop observations.
whstl 2 days ago [-]
This has been the status quo for more than a decade.
In the past SEO blogspam was done by cheap freelancers, and there were several agencies selling the service.
Experts identify blogspam quite easily, but laypeople eat it up and use it as a reference in conversations and decisions.
Google has known about it, has been in contact with such agencies and companies, and has been refusing to do anything about it for the longest time.
bakugo 2 days ago [-]
> I suspect we'll address this
Who is "we"? Definitely not Google or any other major tech company, they're all actively encouraging this.
> trusted sources.
What trusted sources are there that haven't yet been taken over by AI?
dvfjsdhgfv 2 days ago [-]
> Who is "we"? Definitely not Google or any other major tech company, they're all actively encouraging this.
Google has been fighting aggressively to replace its search results with snippets, now generated by LLMs, to avoid sending traffic to other websites. If they continue, they will lead Google Search to a tipping point where a good competitor can take the market by storm. Microsoft also believed Windows was indestructible, and now they're having a rude awakening.
onion2k 2 days ago [-]
The fact is what people really want from a search engine is a single perfect result that answers their query exactly. An LLM does the 'single result' bit, but it's dubious whether or not it's a perfect answer. Most of the time that's probably not very important so long as the answer satisfies the search enough that the user is happy.
Google is trying to turn Search into that product, i.e. the single answer to a given search. They could do that now with Gemini, but the ads in the results are what make them money, and the backlash to embedding adverts into the output of Gemini would drive millions of people to OpenAI overnight. They have to do it slowly. Give it 5 years, though, and search engine results pages will be a thing of the past.
dvfjsdhgfv 2 days ago [-]
> Most of the time that's probably not very important
Well... Maybe, but what's the point of an answer if you can't trust it? For ultra-fast answers to unimportant stuff I keep a Cerebras tab open.
onion2k 1 days ago [-]
What I meant was that it'd be a good answer, but maybe not perfect. For example, if you ask for a coffee recommendation you might get something that's in your top 5, but not number 1. That's better than getting a page of links where the top 3 have paid to be there, the next 5 are SEO farms, and then maybe there's a site about coffee that will answer your question.
2 days ago [-]
vohk 2 days ago [-]
I don't have a ton of hope just yet because I think it's still an incentives problem rather than a technical one.
I got tired of the increasing AI slop in my YouTube Music feed and switched to Deezer a few months ago. Since then, I haven't been able to spot a single AI artist. If a relatively marginal player like that can manage it, why can't Spotify or YTM? My suspicion is simply that Deezer actually tries.
It's the same problem with Google and search. Kagi and others have demonstrated that you can produce better results with an infinitesimal fraction of the budget, and Google is still plenty competent where they care to be. This won't start to get fixed until they see a financial incentive to do so.
conception 2 days ago [-]
Spotify would 100% rather buy/produce AI music than pay artists. They've also demonetized most of their artists, so if they can pump out AI songs that sound enough like what you listen to, then stop promoting real artists, they don't have to pay anyone.
VladVladikoff 2 days ago [-]
Maybe it’s that AI music isn’t being spammed as hard at ‘platform I’ve never heard of before’?
vohk 2 days ago [-]
That's likely a factor, but Deezer reports that it's 28% of their ingest as of last September. Being a smaller target doesn't account for all of it, nor for the fact that openly AI "artists" are not being delisted from the larger platforms, which also provide no way to filter them out.
It's a public good we refuse to turn into a government service for nebulous reasons.
ctoth 2 days ago [-]
[dead]
2 days ago [-]
eh_why_not 2 days ago [-]
It's becoming much harder to determine on a daily basis what content is original, thought-out by a person, and trustworthy. Ironically, verifiably-old content is easier to trust now. Examples from recent personal experience:
1) Some time ago I was searching for growing information about a specific and uncommonly-grown plant, and was led to a top-ranked website with long pages containing everything about it, including other plants. Surprised at how prolific the writing was, I spent more than an hour on the website, taking notes, etc. Every few paragraphs it would include an amazon affiliate link to something topical, which I thought was fair. Until I realized that the links near the bottom of the page were looking more random. Then it hit me, the website is all AI-generated, and the affiliate links themselves are also AI-chosen. And everything new I "learned" from that site was now useless because I had no way to know what was grounded in actual agricultural experience and what was hallucinated.
2) Recently I did a youtube search for a book I had just finished reading, looking for some reviews. Came across a channel that was reading the book as new audio (i.e. not the original published audiobook). I thought it was a fan making it. The voice was beautiful, soothing, and natural with all kinds of relevant emotions correctly included. I started listening to the book again, until I noticed a consistent error in word ordering being made every few lines. Then it hit me! The channel even included one upload with a video recording of a seemingly-real person reading with that voice. Both the audio and video are AI-generated, but very hard to tell.
3) Next to those videos, YT recommended many strange/new channels. One had the photo and the exact voice of a famous (and now very old) physicist, with tens of clickbaity titles about controversial topics in the domain. The only tell was that the voice was too vigorous and consistently energetic, while if you've listened to that physicist before, you know his cadence is slower. At first I thought maybe the channel is reading one of his books; no, the content itself was AI-generated, maybe based on his books. There was a lot of engagement, with many comments like "mind blown" and "learned so much today".
Both #1 and #3 are harmful, because you think you're learning from a reliable source but you end up learning hallucinated nothings. #2 I didn't mind much, still enjoyed the new voice, and even preferred it over my original audible version.
lconnell962 2 days ago [-]
Something I've recently started seeing, maybe even an emerging #4, is AI-generated translations. On one end, you could have someone very intelligent writing well-informed subject-matter expertise, or just someone with valid thoughts they wish to express to the world in a language more widely spoken than their own.
Or on the other end, you could have someone who wrote a sentence or two in their own language and had some combination of AI generation and translation bloat it out.
In both cases you get something that can look right and well thought out, but will probably have at least some of the AI slop signs present. I don't know what the solution is for this type, given claims that Google Translate has started doing this kind of translation for people. An AI translation is probably just as prone to hallucinations as any other AI output, but it will probably look more natural to readers than a direct translation.
anal_reactor 2 days ago [-]
You're making the classic mistake of looking for a trustworthy information source and then trusting it, instead of focusing on whether the information itself is trustworthy regardless of source. It's literally the same as my grandma saying "they said so on TV, therefore it must be true" while completely dismissing anything I've read on the internet because reasons.
If you develop the skill of judging information by its merit rather than source, you won't mind AI-generated content as long as it's helpful.
I talk to LLMs a lot. It's fucking great. Do I take everything they say at face value? No. But neither do I take at face value things that biological intelligence outputs.
xboxnolifes 2 days ago [-]
Information itself cannot be trustworthy. It can be right, it can be wrong, or it can be somewhere in between. Only a source can have trustworthiness, as it's a mixed measure of reputation and provable accuracy.
You filter out known untrustworthy sources to not waste your time verifying false information 100x more than you need to. I know The Onion is a satire publication. I do not need to verify its claims. It's an intentionally untrustworthy source. I know that LLMs can hallucinate information, so I verify with a more trustworthy source. I cross-reference things random people say on the internet, because random people on the internet are not, individually, trustworthy sources of information.
If a rocket engineer explains to me why Rocket A isn't flight ready, I'm more inclined to believe them than if a random commenter on the internet explains it to me. Because the one source is more trustworthy than another, and if I wanted to verify the claim myself I'd have to spend a lot of time studying rocket science.
eh_why_not 2 days ago [-]
No it's not the same as your grandma. The point is that it's now more expensive to find the correct information to learn from. You don't know it's an LLM ahead of time, and you may spend hours until you figure out something is off. Hence why reputable sources will become more valuable.
> If you develop the skill of judging information by its merit rather than source..
Did you read example #1? I'm not talking about some piece of code from an LLM that you can verify or some political opinion that you can take with a grain of salt, but information that you can only gain and/or judge through expertise:
If you're not a physicist yourself, you can't judge "information by its merit" on specific physics topics, because you don't have a solid baseline.
Similarly, in growing plants, each plant has its own peculiarities, and only people experienced in growing it can tell you anything useful - it's knowledge accumulated by trial and error. Not knowledge that your "great discerning mind" can assess on its own. Even a botanist can't tell you the ideal growing conditions of a plant that they've never studied before.
anal_reactor 2 days ago [-]
What if your physics book is wrong because knowledge has advanced since it was released - you can still find lots of publications and people with degrees blissfully unaware of Hawking Radiation. What if your botanical book is wrong because facts have changed since then - climate is changing and so does flora. What if your book is wrong because it's state-funded propaganda mixed with petty fights of a bunch of people with suits and strong opinions disguised as academia - a huge chunk of linguistics is dealing with exactly this issue.
Again, you seem to miss the point that the idea of questioning new information, which was already useful to navigate life before LLMs, before television, before newspaper, before print, before clay tablets, even before speech itself, is equally applicable to LLMs as to any other form of communication. You just need to upgrade your strategies a little and that's it. Don't blow this out of proportion "somebody gasp lied to me on the internet!".
predkambrij 2 days ago [-]
Well, if it's not disclosed, you might assume that somebody did due diligence for you and could have included sources. When I need reliable information, I don't trust an LLM even if all the information is included in the context window. Trying to make money on slop is really bad manners. It's a scam; you can't call it otherwise.
Btw, I like AI, it has created a ton of value for me. We just need to find a way to live with it without drowning in misinformation.
rcxdude 2 days ago [-]
You do ultimately need to trust some sources to some degree. You can try to cross-correlate multiple sources (and this is in general a good habit!), but that depends on some level of trustworthiness in the sources you are looking at; you're not at all immune to misinformation by doing this (especially if multiple sources are, undisclosed, being generated by the same LLM; you could get citogenesis even pre-LLMs). And of course for some things it's possible to verify directly yourself, but this is infeasible to do for everything you depend on.
SpicyLemonZest 2 days ago [-]
There’s a lot of things where this just doesn’t work. I was wrong about a lot of business strategy things when I was younger, to the point where I rejected what I now see were correct arguments against my view of things. How could I have gotten out of that trap without the ability to find trustworthy sources?
predkambrij 2 days ago [-]
I feel for you. I was looking for some wildlife events on YouTube, only to find that all of them were AI-generated, trying to get views. I can only find somewhat reliable content if I filter for content from before the AI era.
visarga 2 days ago [-]
Humans are also unreliable, we are competing for scarce attention, platforms decide what gets visibility and we cater to their algorithms. You could say humans are prompted by feed ranking AI - what and how to publish.
fn-mote 2 days ago [-]
I thought somebody counted them… incredibly, the log message admits to committing 12,000 articles.
I guess that means the log message was authored by AI as well. Figures.
shevy-java 2 days ago [-]
I am kind of upset at GitHub that we can not easily block AI content coming from their site.
nickvec 2 days ago [-]
It’s simply not possible to enforce at scale. How can you definitively say whether something is AI or not?
arcza 2 days ago [-]
So whatever OneUptime is, I now know it has zero integrity and is something I should avoid.
2 days ago [-]
jpdb 2 days ago [-]
I've been seeing this company in ~all of my searches across various tech topics.
They're absolutely dominating search results. The quality isn't terrible, but there's so much content that I can't trust them to be accurate.
TrackerFF 2 days ago [-]
I've seen an increase in this "firehose" tactic among the passive-income folks, where the idea is to just saturate certain niches with AI-generated content, and collect some cents here and some cents there - in the hopes it will generate as much money as maintaining a single high-quality content channel.
Don't know if they actually make any money doing it like that. A couple of weeks ago I stumbled across some content-creator that said he had hundreds of faceless YouTube channels, which was made possible due to AI tools.
iLoveOncall 2 days ago [-]
My son and his friend made a YouTube channel that's just brainrot memes that, while they do it manually, could easily be fully automated by AI (or even without AI).
They have 17 million views in 2 months.
The strategy of spamming trash no-effort content definitely pays.
swores 2 days ago [-]
For just $199, I'll sell you my PDF explaining exactly how to do this well enough to make WAY more than only "some cents here and some cents there". Special limited time offer for HN readers, reduced from my normal price of $1,489!
P.S. Or get it free when buying my $499 "how to make money selling people how to make money guides" guide!
(/s. I generally think HN comments should avoid jokes unless they're genuinely really cleverly funny, which this comment isn't - I only justified it to myself by the fact that the sort of people selling these trashy guides are the same people doing what you're talking about, and I feel they deserve mockery and shaming.)
ThrowawayR2 2 days ago [-]
If the dead Internet theory wasn't true before, it sure will be soon.
post-it 2 days ago [-]
It's kinda exciting. The social media status quo has its upsides but a lot of downsides. I'm hopeful that the change will be good. We'll have to figure out a way to authenticate the people we're talking to, which will encourage tighter-knit communities.
dataviz1000 2 days ago [-]
This will end with the only way to authenticate the people we're talking to is meeting them at the coffeeshop in the morning.
post-it 2 days ago [-]
That might be okay. We'd lose a lot, obviously, but if you could 100% trust that the person you met at a coffee shop is real, and you could 99% trust that the person they met the day before is real, and you could 98% trust the person that person met is real, you've got three degrees of Kevin Bacon.
abathur 2 days ago [-]
But can you trust that the things they say aren't just laundered AI blogspam?
post-it 2 days ago [-]
Well I trust that the things my friends say aren't laundered AI blogspam. And if they trust the things their friends say, I can likely trust that too.
Tepix 2 days ago [-]
Did you forget about Blade Runner?
hackable_sand 2 days ago [-]
... Did you ...?
arctic-true 2 days ago [-]
Until the humanoid robots gain the ability to process caffeine, then we’re all hosed.
kawfey 2 days ago [-]
is anyone using keybase any more? i put it on my website and socials to do just that but it doesn’t seem to have stuck around.
post-it 1 days ago [-]
It's not really the right kind of authentication. A bot can use keybase too.
agilob 2 days ago [-]
Dead Internet is a product now, why aren't you monetizing it yet?
MattGaiser 2 days ago [-]
I would argue SEO should already be considered dead internet theory. Most of it is not intended to do anything but convince Google.
A dentist buying freelance articles from a guy off Upwork is not intending to communicate anymore than this guy generating articles is.
shevy-java 2 days ago [-]
SEO also showed that Google abuses its market position. One wonders why the USA promotes a de-facto monopoly here.
shevy-java 2 days ago [-]
Only if we allow it to happen. It is time for the Empire of common man and woman to strike back against AI slop and companies that promote it - such as microslop.
senordevnyc 2 days ago [-]
Common man and woman don’t care that much.
pilsetnieks 2 days ago [-]
Great point! At this point the Dead Internet Theory isn't a conspiracy – it's a roadmap. It's worth noting he distinction between "authentic" and "synthetic" online spaces is eroding faster than most people realize – that's a genuinely important conversation to be have.
/s
IsTom 2 days ago [-]
> to be have.
Meatbag spotted, get 'im boys.
nubg 2 days ago [-]
Great imitation
pilsetnieks 2 days ago [-]
The dead internet theory terrifies me. I don't think we're at the point where it's mostly dead but we're already way past the point where any discussion worth anything can be had on the internet itself. The problem is not that everything could be AI slop but that anything could. It simply takes the wind out the sails and makes one question what's even the point if anything could just be written by a clanker. Anything you write could just be screaming out into the void, affecting no one, and just maybe adding to the training corpus for the next generation of clankers.
Just writing this made me question "what's the point" several times. If you or anyone replies cogently, I still won't have any idea if it's a person or a Chinese room.
shevy-java 2 days ago [-]
> The dead internet theory terrifies me. I don't think we're at the point where it's mostly dead
Well - I would say the internet is not totally dead yet, but we're approaching the point of it being useless. I remember the 1990s and early 2000s; it was almost innocent compared to the total slop era we have now. Young people today don't even know that Google Search was useful at one point in time. If you use Google Search now, you get so much irrelevant crap that it's effectively useless.
thadt 2 days ago [-]
Ironically, the reason I used Google the most then was because it indexed Usenet while so many other parts of the Internet offered by the other engines were "slop". My, how the turn tables.
MattGaiser 2 days ago [-]
One of the issues is that the purpose of business internet writing is not to be read, but to be ranked well.
bakugo 2 days ago [-]
I think the bigger issue is that the percentage of internet writing that can be classified as "business writing" is growing significantly, now that the effort required to produce it is literally zero.
Overall, it feels like no matter where you go on the internet, it's impossible to dodge content that exists primarily for the purpose of extracting money from the reader in some way. SEO spam blogs, AI startups shilling their latest product, AI generated stories posted to reddit that casually slip in a mention of how the supposed author has recently won money on a gambling website. It's all the same thing, really.
thm 2 days ago [-]
By now, Google is smart enough to not even index this garbage.
AndroTux 2 days ago [-]
I wish that were true.
henry2023 2 days ago [-]
Search "Redix for Redis connections in Elixir". This blog slop is the second result.
Google encourages this.
raincole 2 days ago [-]
Serious question: What is this post about and why should we care? It's a repo with 35 stars. Is adding 12,000 posts in a single commit somehow technically difficult or significant?
bakugo 2 days ago [-]
You should care because this website has a high ranking on Google and these 12000 posts will show up every time you search something programming related.
conception 2 days ago [-]
Stop using Google. Kagi lets you block and prioritize sites.
bakugo 2 days ago [-]
I have used Kagi, it's not a suitable replacement. It still struggles with relevance even compared to the garbage that is current-day Google, and is particularly bad at finding recent (less than a month or two old) information.
kawfey 2 days ago [-]
Likewise. Every query returns dozens and dozens of AI generated domains and blog posts like this. It’s MUCH better than google with filters and small web but still sad.
DrewADesign 2 days ago [-]
I heard that they might have fixed the problem, but I initially dropped it when they stopped respecting quotes, even in verbatim mode. Like, if I’m looking for an obscure product number, I don’t want a bunch of shit with a few digits off if there are no actual hits. I want no hits if the settings and query demand it.
radicality 2 days ago [-]
About to go do that on Kagi for the linked site. Oh and also hit the “Report this site as AI generated”
encom 2 days ago [-]
>Stop using Google
I've been using DuckDuckGo for years now, but their search results have now become so terrible, it's nearly unusable. And I have to "quote" every word, otherwise it just randomly omits it from search for no reason. Honestly it's so bad I wonder if all the developers left, and the site is just coasting along.
Maybe I'll try Kagi, but it's not something I can pitch to normies.
self_awareness 2 days ago [-]
This post was shared because it shows how easy this is, and that it's being done in real life. That we can't have nice things, because of mindless people like Nawaz Dhandala.
raincole 2 days ago [-]
I'm quite sure in every passing second people are pumping more AI slop to the internet. I just don't see why this is something special (unless it's a well-known project among HN users that I'm not aware of.)
self_awareness 2 days ago [-]
I'm also quite sure, but this is the proof, not hypothesis -- with git commits and all.
petterroea 2 days ago [-]
This is why I never trust blog posts any more. If a company logo is attached, it's just SEO garbage.
2 days ago [-]
hirako2000 2 days ago [-]
> All content must be original and not published anywhere else.
Do what I say, not what I do.
CrzyLngPwd 2 days ago [-]
There doesn't seem to be a workable plan for how to cope with the onslaught of AI output, and it's going to get much worse.
The sentinels of the internet, Meta/Google/MS/etc., just seem to be largely ignoring it, or even supporting it.
It's already nauseatingly common on all major platforms.
gib444 2 days ago [-]
"Showing 1 - 25 of 45488 posts"
I miss the days when we could assume that's just a pagination code bug
vova_hn2 2 days ago [-]
It's "Showing 1 - 25 of 58891 posts" now. HN tells me that your comment was posted 6 minutes ago, which gives us approximately 37.23 posts/second rate.
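The arithmetic checks out against the two counts quoted in this subthread (assuming the six-minute gap between the two snapshots is accurate):

```python
# Posts added between the two pagination snapshots, divided by
# the roughly six minutes that elapsed between them.
earlier, later = 45488, 58891
elapsed_seconds = 6 * 60
rate = (later - earlier) / elapsed_seconds
print(f"{rate:.2f} posts/second")  # → 37.23
```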
wartywhoa23 2 days ago [-]
AI is the stellar moment for all mediocrity and conmen.
miyuru 2 days ago [-]
The commit author is here and has only posted slop here as well.
I visited a blog on this site while searching for something. Suffice it to say, it was a very shoddy attempt at a blog, and at this point I should just network-block this site entirely.
2 days ago [-]
StrLght 2 days ago [-]
I am so glad DuckDuckGo allows blocking specific sites from the search. Just did this for a domain linked in this repository.
encom 2 days ago [-]
It would be nice if DDG made even a token attempt at making their search not shit. I still use it, but mostly out of habit and because I suspect every alternative is also shit.
konradx 2 days ago [-]
So now you don't get any hits from "Hacker News" ? :-)
2 days ago [-]
avian 2 days ago [-]
Just this morning I opened up my RSS reader and found that it was flooded by weird, twisty prose exalting the virtues of online gambling. Since I follow a few blogs that post long-form content, I first thought this was satire or something, but after reading for a bit and seeing that the posts just never end, my best guess was it's just AI slop intended to drive traffic to some gambling site - not clear which since there were not links. All posts came from the RSS feed of an apparently abandoned tech blog I was following that had its last legit post in 2020. My guess is the domain expired, a squatter bought it, saw a bunch of requests for the RSS feed and grabbed the opportunity. Although to what end I'm not sure.
camdenreslink 2 days ago [-]
For every sign up to that gambling site from their affiliate link they make a few bucks (sometimes many few bucks).
fragmede 2 days ago [-]
> not clear which since there were not links.
How does that work tho?
camdenreslink 7 hours ago [-]
Oh interesting, then it was likely the owner of the gambling site itself doing this shady stuff.
troupo 2 days ago [-]
Ironic, considering the README:
--- start quote ---
These blog posts are written by the OneUptime team and open source contributors. We write about our experiences, our learnings, and our thoughts on the world of software development, Kubernetes, Ceph, SRE, DevOps, Cloud and more. We hope you find our posts helpful and insightful.
--- end quote ---
Steppphennn 2 days ago [-]
I don’t see how the author isn’t embarrassed. Maybe it’s just me having imposter syndrome, or maybe I can self-reflect. If he used AI to slop up all those articles, doesn’t he know any developer can get that same content from AI right in the IDE? He’s trying to game something with a tool that effectively killed off that game in the first place.
ThrowawayR2 2 days ago [-]
Getting a check for advertising revenue overcomes all sorts of embarrassment.
moomoo11 2 days ago [-]
I’m a south asian guy so I’ll just say it. I’m not surprised anymore when a lot of scammy/scummy behavior turns out to be done by a south asian.
In sf too most of the scammers and scummy founders are south asian.
It’s gross and honestly as a south asian doing something legit it sucks to see them just fulfilling a stereotype.
These assholes are the same types responsible for why those societies are fucked up, being in SF most south asians I’ve met are from super wealthy families there that exploit people. Not surprising their new generation is exploiting too.
Downvote me if that upsets you but someone’s gotta call it out.
wormpilled 2 days ago [-]
I appreciate you for saying that. It's something we're going to have to deal with: high-trust societies' institutions getting eroded by these types of people, only interested in getting theirs.
What particularly bugs me about this is the made-up white man painted as the slop author.
You can fabricate a professional business image in a few days with AI now. It's going to be hard to build an honest brand when everyone is going to point and say "vibe coded slop" because of examples like this website.
I'm already seeing such comments whenever someone posts an app on /r/macapps and it's really discouraging for beginners. If I would have met that resistance and amount of mean comments when I launched Lunar, I would have probably never put in that amount of effort.
noslop 2 days ago [-]
"This enhancement improves the user experience by showcasing positive feedback from customers"
you can't make this up
r_lee 2 days ago [-]
I've seen this blog slop on Google for the last month or so, with no action taken whatsoever. It's mostly bullshit or regurgitated info from the docs.
It's like Google's Search team really doesn't care at all. All of a sudden, a random blog website just happens to rank on the first page for every topic.
tempest_ 2 days ago [-]
Google is not incentivized to show you good results. You don't pay them, advertisers do and that is who they are working for.
Their job is to provide you just enough "results" that you don't or can't go anywhere else.
No more, no less.
gibsonsmog 2 days ago [-]
Louis Rossman recently posted a video (https://www.youtube.com/watch?v=II2QF9JwtLc) where he had Gemini replace his 10+ year carefully curated content with AI slop and he instantly shot (back) up to the top of the rankings. They're very clearly favoring their own generated generic content rather than any sort of organic, well written or well informed entries. Shame.
emsign 2 days ago [-]
There's AI features and tips in Youtube's Creator Studio, they are encouraging creators to use AI tools. Makes sense that they also then reward videos that make use of it. That's how these platforms nudge people into products and behavior that they want to bring to market.
masfuerte 2 days ago [-]
Google used to prioritise search quality. About six years ago they decided to enshittify. Slop with more adverts is promoted over quality with fewer adverts. This isn't speculation. It came out in emails released as part of antitrust discovery.
To reiterate: Google search is shit now because they want it to be.
fg137 2 days ago [-]
Sounds like a good argument for using Kagi.
whycombinetor 2 days ago [-]
If it's a choice between a human and an AI copywriting SEO slop, I'm happy to see an AI take that job. SEO content marketing is so painful to read once you realize you're reading it, and I have to imagine it's just as painful to write if you're a technically talented writer.
swores 2 days ago [-]
I agree with you about the majority of "SEO content marketing", but a small minority of it is done by companies who genuinely care about doing good content, that doesn't only act as lazy SEO benefit but also as good marketing for people who read it.
It's a lot harder / more expensive to produce, as it needs (at least before AI, and I guess still to some extent even using AI for now) to be written by someone on the team who genuinely understands the company's technology/product/whatever well enough to educate other people about it in an interesting way, rather than it being written by low wage SEO writers who just need a list of keywords to include in the drivel that is the sort of content you're talking about. So it makes sense that most companies go with the cheap option, but it's always nice to come across ones who produce actual interesting articles.
(It's what I've always opted for when I've overseen marketing budgets, and I think the ROI is usually worth it since balancing the extra cost is the fact that the benefits go from just SEO, to SEO + word of mouth of people sharing the interesting article they read, and the awareness of the brand that comes with it. So I recommend anyone who normally chooses lazy, low quality content for SEO to consider the upgrade!)
srhyne 2 days ago [-]
I’ve naturally landed a handful of their posts recently through search. I was impressed with the quality.
Interesting to see this after the fact.
ieie3366 2 days ago [-]
Ironically due to slop I feel like we are regressing as a civilization
2020, want to know how to use Redix for Redis connections in Elixir? Google it and the results were most likely high quality, written by senior engineers who knew what they were doing
Today google that, and it will be endless amounts of slop
bakugo 2 days ago [-]
> Redix for Redis connections in Elixir
I googled this exact sentence, and the third result was a link to the blog this post is about.
Grim.
progbits 2 days ago [-]
I'm guessing there is a bit of a feedback loop now since people try this, search for the slopsite and click it, boosting it higher. For me it was top result (in incognito, not personalized).
Two things you can do:
- Navigate back and open another link. This signal is used to downrank the site for the given query (Google assumes it did not provide a satisfying answer)
- Explicitly provide result feedback. Unfortunately there isn't a category for "this is slop" but "inaccurate" works.
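The first signal above is sometimes called "pogo-sticking". A toy sketch of how such a signal might feed into a ranking score — the function name, weights, and threshold are all invented for illustration, since Google's actual pipeline is not public:

```python
# Toy "pogo-sticking" downrank: a result the user clicks and quickly
# bounces back from loses score; one the user stays on gains a little.
# All names, weights, and thresholds here are hypothetical.

def adjust_score(score: float, dwell_seconds: float,
                 returned_to_results: bool,
                 bounce_threshold: float = 10.0) -> float:
    """Update one result's relevance score after a single click event."""
    if returned_to_results and dwell_seconds < bounce_threshold:
        return score * 0.90   # quick bounce: likely an unsatisfying page
    if not returned_to_results:
        return score * 1.05   # user stayed: likely a satisfying page
    return score              # slow return: ambiguous, leave unchanged

score = 1.0
# User clicks a slop result and backs out after 4 seconds:
score = adjust_score(score, dwell_seconds=4.0, returned_to_results=True)
print(score)  # 0.9
```

The feedback loop described above would then be this same mechanism running in reverse: clicks without a bounce nudge the slop site up.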
elcapitan 2 days ago [-]
For some searches I've started to limit the date range to pre 2023. That drastically improves search results (DDG, but I imagine Google as well). As long as you're looking for more long term information/posts ofc.
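On Google the same restriction can be expressed inline with its documented `before:` search operator (DuckDuckGo exposes the same filter through its date dropdown). A small sketch of building such a query URL — the search terms are just an example, the rest is standard URL encoding:

```python
from urllib.parse import urlencode

# Restrict a Google query to pages dated before 2023 by appending the
# documented `before:` operator to the query string.
query = "redix elixir before:2023-01-01"
url = "https://www.google.com/search?" + urlencode({"q": query})
print(url)
# https://www.google.com/search?q=redix+elixir+before%3A2023-01-01
```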
johnbarron 2 days ago [-]
>> Ironically due to slop I feel like we are regressing as a civilization
- Apollo had a significantly higher accepted risk. Apollo 1 or 13 would be untenable today.
- The percentage of 13-year-olds that made it into and through eighth grade was significantly smaller in 1912. Your average poor farming kid did not go to eighth grade.
johnbarron 6 hours ago [-]
>> Apollo had a significantly higher accepted risk.
Apollo would never risk the astronauts' lives like NASA is doing:
Apollo 10 was a dress rehearsal for the Moon landing, with the Lunar Module descending to within 9 miles (about 15 kilometers) of the lunar surface, but the crew did not attempt a landing.
Artemis is testing a new heat shield, with humans on board, that has only been validated by computer models and almost failed catastrophically on Artemis I.
The difference between then and now, is that now you have a commission of public servants and then you had Harvey Allen and Wernher von Braun.
We are all quickly becoming allergic to AI writing.
To fool us into thinking writing is not AI generated, we will create "human-ifying" filters to the LLM. This will introduce common keystroke, grammar, and spelling issues that surely no automation would ever create on its own.
Soon the writing most vaunted and trusted will be the writing that appears written by a 4 year old with a crayon.
Sigh.
ssl-3 2 days ago [-]
[dead]
nelsonfigueroa 2 days ago [-]
Well, at least they're not exactly hiding it.
WJW 2 days ago [-]
Github only reports 5012 changed files though.
sigmonsays 2 days ago [-]
when AI starts training itself accidentally on AI generated content, we all lose...
Why would anyone read AI generated blog posts when I can just ask AI for what I need already
For gaming SEO this is still bad: no backlinks.
cebert 2 days ago [-]
What is the point of this?
wiether 2 days ago [-]
Feeding the beast
nelsonfigueroa 2 days ago [-]
SEO purposes would be my guess
nunez 2 days ago [-]
Welcome to the slop age!
tadfisher 2 days ago [-]
> Showing 1-25 of 58891 posts
I have to imagine that one quality post worth reading would be linked in multiple places, thus would beat tens of thousands of slop articles for SEO purposes?
Retr0id 2 days ago [-]
You'd think, but very low quality AI-generated content regularly makes it to the HN front page, so it's just a numbers game.
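That link-vs-volume intuition can be sketched with a minimal PageRank iteration. This is a toy graph with invented page names, not any real ranking pipeline:

```python
# Minimal PageRank sketch: a single post with a few genuine inbound
# links outranks a pile of slop posts that nothing links to.
# Toy graph and damping factor; real search uses many more signals.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: spread its rank evenly
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:
                for target in outgoing:
                    new[target] += damping * rank[page] / len(outgoing)
        rank = new
    return rank

# Two independent blogs link to one quality post; the slop posts
# only link out to the product page they are shilling.
graph = {
    "blog_a": ["quality_post"],
    "blog_b": ["quality_post"],
    "quality_post": [],
    "slop_1": ["product_page"],
    "slop_2": ["product_page"],
    "slop_3": ["product_page"],
    "product_page": [],
}
ranks = pagerank(graph)
print(ranks["quality_post"] > ranks["slop_1"])  # True
```

The point isn't the exact numbers: in this family of algorithms inbound links drive the score, not post count, which is why the numbers game still depends on spamming links rather than articles.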
sgbeal 2 days ago [-]
> I have to imagine that one quality post worth reading ...
As the Berliners say:
"Die Hoffnung stirbt zuletzt"
or:
"Hope is the last thing to die" (or "hope dies last" if one prefers a literal translation)
nicbvs 2 days ago [-]
Trying to hide all their CVEs behind AI slop
whattheheckheck 2 days ago [-]
If this shit can come from the LLMs, why are we re-dragging it out of them?
To reverify correctness?
antiloper 2 days ago [-]
"Nawaz Dhandala"
self_awareness 2 days ago [-]
I nominate Nawaz Dhandala as "the king of AI slop"
sph 2 days ago [-]
He's just an idiot doing it in public, because there are people generating hundreds of posts a day for years now without committing it on github under their real name.
ugiox 2 days ago [-]
Now we know why GitHub has a hard time with stability and reliability. Because of this AI slop BS inflicted on us by the Silicon Valley tech bros and all their followers.
I like the topics! "Grpc-native microservices" is a wonderful piece of nonsense!
djoldman 2 days ago [-]
"Showing 1 - 25 of 58891 posts"
Seems to check out.
progbits 2 days ago [-]
It's actually hilariously bad.
If you go to the last (2356th!) page, you will see eight posts from 2023 and 2024, mostly a few months apart. (But none of those are good either.)
Then in 2025 @nawazdhandala starts going wild with 22 articles on January 6th. And from that point on it's basically just all him and it keeps accelerating.
(By coincidence, see also https://news.ycombinator.com/item?id=47641829)
In the past SEO blogspam was done by cheap freelancers, and there were several agencies selling the service.
Experts identify blogspam quite easily, but laypeople eat it up and use it as a reference in conversations and to make decisions.
Google has known about it, has been in contact with such agencies and companies, and has been refusing to do anything about it for the longest time.
Who is "we"? Definitely not Google or any other major tech company, they're all actively encouraging this.
> trusted sources.
What trusted sources are there that haven't yet been taken over by AI?
Google has been fighting aggressively to replace its search results with snippets, now generated by LLMs, to avoid sending traffic to other websites. If they continue, they will basically lead Google Search to a tipping point where a good competitor can take this market by storm. Microsoft also believed Windows was indestructible, and now they're having a rude awakening.
Google is trying to turn Search into that product e.g. the single answer to a given search. They could do that now with Gemini, but the ads in the results are what makes them money, and the backlash to embedding adverts into the output of Gemini would drive millions of people to OpenAI overnight. They have to do it slowly. Give it 5 years though, and search engine results pages will be a thing of the past.
Well... Maybe, but what's the point of an answer if you can't trust it? For ultra-fast answers to unimportant stuff I keep a Cerebras tab open.
I got tired of the increasing AI slop in my YouTube Music feed and switched to Deezer a few months ago. Since then, I haven't been able to spot a single AI artist. If a relatively marginal player like that can manage it, why can't Spotify or YTM? My suspicion is simply that Deezer actually tries.
It's the same problem with Google and search. Kagi and others have demonstrated that you can produce better results with an infinitesimal fraction of the budget, and Google is still plenty competent where they care to be. This won't start to get fixed until they see a financial incentive to do so.
https://newsroom-deezer.com/2025/09/28-fully-ai-generated-mu...
It's a public good we refuse to turn into a government service, for nebulous reasons.
1) Some time ago I was searching for growing information about a specific and uncommonly-grown plant, and was led to a top-ranked website with long pages containing everything about it, including other plants. Surprised at how prolific the writing was, I spent more than an hour on the website, taking notes, etc. Every few paragraphs it would include an amazon affiliate link to something topical, which I thought was fair. Until I realized that the links near the bottom of the page were looking more random. Then it hit me, the website is all AI-generated, and the affiliate links themselves are also AI-chosen. And everything new I "learned" from that site was now useless because I had no way to know what was grounded in actual agricultural experience and what was hallucinated.
2) Recently I did a youtube search for a book I had just finished reading, looking for some reviews. Came across a channel that was reading the book as new audio (i.e. not the original published audiobook). I thought it was a fan making it. The voice was beautiful, soothing, and natural with all kinds of relevant emotions correctly included. I started listening to the book again, until I noticed a consistent error in word ordering being made every few lines. Then it hit me! The channel even included one upload with a video recording of a seemingly-real person reading with that voice. Both the audio and video are AI-generated, but very hard to tell.
3) Next to those videos, YT recommended many strange/new channels. One had the photo and the exact voice of a famous (and now very old) physicist, with tens of clickbaity titles about controversial topics in the domain. The only tell was that the voice was too vigorous and consistently energetic, while if you've listened to that physicist before, you know his cadence is slower. At first I thought maybe the channel is reading one of his books; no, the content itself was AI-generated, maybe based on his books. There was a lot of engagement, with many comments like "mind blown" and "learned so much today".
Both #1 and #3 are harmful, because you think you're learning from a reliable source but you end up learning hallucinated nothings. #2 I didn't mind much, still enjoyed the new voice, and even preferred it over my original audible version.
Or on the other end you could have someone who wrote a sentence or two in their own language and had some combination of AI generation and translation bloat it out.
In both cases you will get something that can look right and well thought out or explained, but will probably have at least some of the AI slop signs present. I don't know what the solution is for this type, given claims that Google Translate has started doing this kind of translation for people. An AI translation is probably just as prone to hallucinations as any other AI output, but it will probably look more natural to readers than a direct translation.
If you develop the skill of judging information by its merit rather than source, you won't mind AI-generated content as long as it's helpful.
I talk to LLMs a lot. It's fucking great. Do I take everything they say at face value? No. But neither do I take at face value things that biological intelligence outputs.
You filter out known untrustworthy sources to not waste your time verifying false information 100x more than you need to. I know The Onion is a satire publication. I do not need to verify its claims. It's an intentionally untrustworthy source. I know that LLMs can hallucinate information, so I verify with a more trustworthy source. I cross-reference things random people say on the internet, because random people on the internet are not, individually, trustworthy sources of information.
If a rocket engineer explains to me why Rocket A isn't flight ready, I'm more inclined to believe them than if a random commenter on the internet explains it to me. Because the one source is more trustworthy than another, and if I wanted to verify the claim myself I'd have to spend a lot of time studying rocket science.
> If you develop the skill of judging information by its merit rather than source..
Did you read example #1? I'm not talking about some piece of code from an LLM that you can verify or some political opinion that you can take with a grain of salt, but information that you can only gain and/or judge through expertise:
If you're not a physicist yourself, you can't judge "information by its merit" on specific physics topics, because you don't have a solid baseline.
Similarly, in growing plants, each plant has its own peculiarities, and only people experienced in growing it can tell you anything useful - it's knowledge accumulated by trial and error. Not knowledge that your "great discerning mind" can assess on its own. Even a botanist can't tell you the ideal growing conditions of a plant that they've never studied before.
Again, you seem to miss the point that the idea of questioning new information, which was already useful to navigate life before LLMs, before television, before newspaper, before print, before clay tablets, even before speech itself, is equally applicable to LLMs as to any other form of communication. You just need to upgrade your strategies a little and that's it. Don't blow this out of proportion "somebody gasp lied to me on the internet!".
I guess that means the log message was authored by AI as well. Figures.
They're absolutely dominating search results. The quality isn't terrible, but there's so much content that I can't trust them to be accurate.
Don't know if they actually make any money doing it like that. A couple of weeks ago I stumbled across some content-creator that said he had hundreds of faceless YouTube channels, which was made possible due to AI tools.
They have 17 million views in 2 months.
The strategy of spamming trash no-effort content definitely pays.
P.S. Or get it free when buying my $499 "how to make money selling people how to make money guides" guide!
(/s. I generally think HN comments should avoid jokes unless they're genuinely really cleverly funny, which this comment isn't - I only justified it to myself by the fact that the sort of people selling these trashy guides are the same people doing what you're talking about, and I feel they deserve mockery and shaming.)
A dentist buying freelance articles from a guy off Upwork is not intending to communicate anymore than this guy generating articles is.
/s
Meatbag spotted, get 'im boys.
Just writing this made me question "what's the point" several times. If you or anyone replies cogently, I still won't have any idea if it's a person or a Chinese room.
Well - I would say the internet is not totally dead yet, but we approach the point of it being very useless now. I remember the 1990s era and early 2000s - it was almost innocent compared to the total slop era we have now. Young people today don't even know that Google Search was useful at one point in time. If you use Google Search now, you get so much irrelevant crap output that it is really useless now.
Overall, it feels like no matter where you go on the internet, it's impossible to dodge content that exists primarily for the purpose of extracting money from the reader in some way. SEO spam blogs, AI startups shilling their latest product, AI generated stories posted to reddit that casually slip in a mention of how the supposed author has recently won money on a gambling website. It's all the same thing, really.
Google encourages this.
I've been using DuckDuckGo for years now, but their search results have become so terrible it's nearly unusable. And I have to "quote" every word, otherwise it just randomly omits it from the search for no reason. Honestly it's so bad I wonder if all the developers left, and the site is just coasting along.
Maybe I'll try Kagi, but it's not something I can pitch to normies.
Do what I say, not what I do.
The sentinel servers, meta/google/ms/etc. just seem to be largely ignoring it, or even supporting it.
It's already nauseatingly common on all major platforms.
I miss the days when we could assume that's just a pagination code bug
https://news.ycombinator.com/submitted?id=ndhandala
Wonder when he will submit them here.
How does that work tho?
--- start quote ---
These blog posts are written by the OneUptime team and open source contributors. We write about our experiences, our learnings, and our thoughts on the world of software development, Kubernetes, Ceph, SRE, DevOps, Cloud and more. We hope you find our posts helpful and insightful.
--- end quote ---
In sf too most of the scammers and scummy founders are south asian.
It’s gross and honestly as a south asian doing something legit it sucks to see them just fulfilling a stereotype.
These assholes are the same types responsible for why those societies are fucked up, being in SF most south asians I’ve met are from super wealthy families there that exploit people. Not surprising their new generation is exploiting too.
Downvote me if that upsets you but someone’s gotta call it out.
https://github.com/mallersjamie
https://github.com/OneUptime/oneuptime/commit/538e40c4ae724e...
https://github.com/OneUptime/oneuptime/commit/2bc585df20e6bb...
Well, after 50 years we can't reproduce what Apollo did, and I doubt current students of the same age would handle a 1912 Eighth Grade Examination: https://www.bullittcountyhistory.com/bchistory/schoolexam1912.html
"Artemis II is not safe to fly" - https://news.ycombinator.com/item?id=47582043
Apollo 2 through 6 were uncrewed tests.
I'll just leave this here: https://developers.google.com/search/help/report-quality-iss...
Scroll down a little and you'll see a huge block of posts dated March 31st