I don't personally have any tools to check for this... I don't care for the relatively Jr level content myself, and often call out errors or at least oddness in the articles that I do read with a comment. I don't always read every article though, just a quick pass to clear obvious spam.
I could favor nuking the very jr and simple content... just wouldn't want to actually discourage anyone genuinely writing learner/beginner content.
That's a good point. AI-assisted writing is indeed a thing.
I wonder if there will ever be a tool smart enough to reliably detect how much of the content in an article is genuinely human-made.
That would definitely be a nice tool... I feel that this will only get more difficult in time though. Right now it's easy enough to prompt a few popular LLMs to see if you get similar text, but man, what will the cost be in the end.
Both in generated content as well as bot detection and anti-bot tooling. I'd hate to be github.
I can only say I fully expect we may see a return to more niche communities that are self-regulated, similar to BBSes of old. If only because the popular social media platforms are already inundated with bot activity.