
I am starting to think these are not just AI scrapers blindly seeking out data. All kinds of FOSS sites, including low-volume forums and blogs, have been under this kind of persistent pressure for a while now. Given the cost involved in maintaining this kind of widespread constant scraping, the economics don’t seem to line up. Surely even big-budget projects would adjust their scraping rates based on how many changes they see on a given site. At scale this could save a lot of money and would reduce the chance of blocking.
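
To make the economics point concrete: a conditional GET is nearly free when nothing has changed, so a polite crawler barely costs anything to run against a quiet site. A rough sketch in Python with the requests library (the URL, intervals, and process() handler are all placeholders):

    import time
    import requests

    def poll_politely(url, min_interval=3600, max_interval=86400):
        """Re-fetch a page, backing off while it stays unchanged."""
        etag = None
        interval = min_interval
        while True:
            headers = {"User-Agent": "ExampleBot/1.0 (ops@example.com)"}
            if etag:
                headers["If-None-Match"] = etag  # conditional GET
            resp = requests.get(url, headers=headers, timeout=30)
            if resp.status_code == 304:
                # Unchanged: almost no bandwidth spent; poll less often.
                interval = min(interval * 2, max_interval)
            elif resp.ok:
                etag = resp.headers.get("ETag")
                interval = min_interval  # changed: poll sooner again
                process(resp.text)       # hypothetical handler
            time.sleep(interval)

That the bots observed in the wild clearly don't do even this is part of what makes the economics so puzzling.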

I haven’t heard of the same attacks facing (for instance) niche hobby communities. Does anyone know if those sites are facing the same scale of attacks?

Is there any chance that this is a deniable attack intended to disrupt the tech industry, or even the FOSS community in particular, with training data gathered as a side benefit? I’m just struggling to understand how the economics can work here.


> I haven’t heard of the same attacks facing (for instance) niche hobby communities. Does anyone know if those sites are facing the same scale of attacks?

They are. I participate in modding communities for very niche gaming projects. All of them have experienced massive DDoS-scale traffic from AI scrapers on their websites over the past year. These are long-running non-commercial projects that hold no business interest for anyone, so nobody would spend resources purely to take them offline. They had to temporarily put the majority of their discussion boards and development resources behind a login wall to avoid having to go down completely.


Thanks. The scale of this is just mind-boggling. Unbelievably wasteful.

How many of these scrapers were written with AI by data-science folks who don't remotely care how often they're hitting the sites, and are collecting data they wouldn't even think to feed to the LLM or ask it about?

But does that explain all of the various scrapers doing the same thing across the same set of sites? And again, the sheer bandwidth and CPU time involved should eventually bother the bean counters.

I did think of a couple of possibilities:

- Someone has a software package or list of sites out there that people are using instead of building their own scrapers, so everyone hits the same targets with the same pattern.

- There are a bunch of companies chasing a (real or hoped-for) “scraped data” market, perhaps overseas where overhead is lower, and there’s enough excess AI funding sloshing around that they’re able to scrape everything mindlessly for now. If that’s the case, the problem should fix itself as funding gets tighter.


My theory on this one is that some serial wantrepreneur came up with a business plan of scraping the archive and feeding it into an LLM to identify some vague opportunity, then paid some Fiverr / Upwork kid in India $200 to get the data. The good news is that this website, or any other, can mitigate these things by moving to Cloudflare, which is free.

A couple of forums I have lurked on for years have closed up and now require a login to read.

I've wondered for a while if simple interaction systems would be enough to fend these things off without building walls like logins. Things like Anubis make the browser do a proof-of-work check, but I'm wondering if it would be even easier to do something like the old-school CAPTCHAs, where a single interactive element requires user input before redirecting to the real page. You hit a landing page and drag a slider, or click and hold, to get to the page proper: less annoying than modern CAPTCHAs, and almost a fun little ritual for entering.
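
Something like this could be tiny. A rough sketch of the click-and-hold version, assuming Flask; all the names are made up, and a real version would use a signed token rather than a bare cookie:

    from flask import Flask, request, redirect

    app = Flask(__name__)

    LANDING = """
    <button id="go">hold to enter</button>
    <script>
    let t;
    const b = document.getElementById("go");
    b.onmousedown = () => {
      t = setTimeout(() => {
        document.cookie = "human_gate=1; path=/";
        location.href = "/forum";
      }, 800);  // require roughly a second of holding the button
    };
    b.onmouseup = () => clearTimeout(t);
    </script>
    """

    @app.route("/")
    def landing():
        return LANDING

    @app.route("/forum")
    def forum():
        # No interaction cookie? Back to the landing page.
        if request.cookies.get("human_gate") != "1":
            return redirect("/")
        return "the actual forum content"

It only filters fetchers that don't run JavaScript, and the cookie is trivially forgeable once someone bothers to look, but that might be the same trade-off Anubis makes with less annoyance for humans.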

As I'm writing this I'm reminded of Flash-based homepages, and it really makes it apparent that Flash would be perfect for impeding these LLM crawlers.


Just as an additional anecdata point:

I run a small, niche browser game (~125 weekly unique users, down from around 1500 at its peak 15 years ago), and until I put its Wiki behind a login wall a few months ago, we were getting absolutely hammered by the bots. Not open source, not anything of particular interest to anyone beyond those already playing the game and the very select group of people who, if they found it, might actually enjoy it. (It's all text, almost-entirely-player-driven, and can be very slow at times, so people used to modern mobile games and similar dopamine factories tend to bounce off of it very quickly.)

Some of the UAs we saw included Claude and OpenAI, but there were a lot of obviously-bot requests to the Wiki that were using generic UAs and residential IPs.
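
The declared crawlers are at least easy to filter; it's the generic UAs on residential IPs that make this hard. A sketch of the easy half, using UA tokens these companies publish (the list is illustrative, not exhaustive):

    # Tokens the declared crawlers put in their User-Agent strings.
    # Illustrative, not exhaustive -- and useless against generic UAs.
    DECLARED_BOTS = ("GPTBot", "ClaudeBot", "CCBot", "Bytespider")

    def is_declared_bot(user_agent: str) -> bool:
        return any(token in user_agent for token in DECLARED_BOTS)

Wire that into whatever serves the wiki and return a 403; the residential-IP traffic still gets through, which is exactly the problem.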

If there's a concerted effort to swamp open-source projects, it's not the only thing going on. I think it's much more likely that the primary cause of this flood is people who a) think they have the right to absolutely everything on the internet, b) expect everyone they scrape from to be actively trying to hide the data from them (so, for instance, they will ignore any exposed API), and c) don't care how many resources they use or how much damage they do.


> I haven’t heard of the same attacks facing (for instance) niche hobby communities. Does anyone know if those sites are facing the same scale of attacks?

Yes. Fortunately, if your hobby community is regional, you can be fairly blunt with your blocks.
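
For anyone wanting to try it, a sketch of the blunt version, assuming MaxMind's free GeoLite2 country database and the geoip2 package (the allowed set is a made-up example):

    import geoip2.database
    import geoip2.errors

    reader = geoip2.database.Reader("GeoLite2-Country.mmdb")
    ALLOWED = {"DE", "AT", "CH"}  # e.g. a German-language hobby forum

    def allowed(ip: str) -> bool:
        """Drop everything outside the community's region."""
        try:
            return reader.country(ip).country.iso_code in ALLOWED
        except geoip2.errors.AddressNotFoundError:
            return False

You'll lose the odd traveling member and anyone on a foreign VPN, but for a regional community that's usually an acceptable price.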



