So this is an interesting snake-eating-its-tail problem. Current LLMs can only 'learn'[0] by reading human-written text. However, when an LLM trains on text that is itself LLM-generated, its 'knowledge'[0] degrades.
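To make the "degrades its knowledge" part concrete, here's a deliberately toy sketch (my own illustration, nothing to do with how any actual lab trains): fit a simple model to some data, sample from it while favoring the 'likely' middle of the distribution the way generated text favors common phrasing, then train the next generation only on those samples. The diversity of the data collapses within a few generations.

```python
# Toy sketch of generation-on-generation collapse (hypothetical illustration,
# not how any real lab trains): each generation fits a Gaussian to the
# previous generation's output, but -- like an LLM favoring high-probability
# text -- that output under-represents the rare tails.
# The diversity (stdev) of the training data shrinks every generation.
import random
import statistics

random.seed(0)

# Generation 0: "human" data with genuine diversity.
data = [random.gauss(0.0, 1.0) for _ in range(10_000)]

for gen in range(10):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    print(f"gen {gen}: stdev of training data = {sigma:.3f}")
    # The next model trains only on the previous model's output,
    # which drops the tails (here crudely clipped at 2 sigma).
    samples = [random.gauss(mu, sigma) for _ in range(10_000)]
    data = [x for x in samples if abs(x - mu) < 2 * sigma]
```

Real LLM training is obviously vastly more complicated than fitting a Gaussian, but the failure mode is the same intuition: rare, tail knowledge gets washed out a little more with every generation trained on the previous one's output.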
Up until ~2023, they were able to train on mostly non-LLM data scraped from the general internet. Anyone who was a web host at the time probably saw a TON of scrapers crawling around with no real direction, as all the AI companies were trying to consume ALL of the internet. However, once OpenAI spilled the beans by publicly releasing ChatGPT, that gravy train was over!
The internet, almost overnight, became significantly AI-generated. You can see this by googling things and realizing that, for some queries, 90% of the Google results are the same regurgitated garbage over and over, because LLM-generated websites dominate. So AI doesn't really have a PLACE to train anymore, or at least, fresh training data is VERY tough to find, because, again, consuming LLM-generated data harms training. AND, if they can't keep consuming new data, they get 'stuck' historically at whatever point they stopped consuming! For a while, ChatGPT's knowledge cutoff was frozen in late 2023 for exactly this reason.
In this modern internet, curated 'human-only' content is getting more and more valuable. There is a reason the AI companies are doing everything they can to get into active 'human-only' communities like Reddit: they want to be able to consume that data!
All that to say: forums like this are unbelievably valuable to LLM companies, particularly with the curation mods do to keep AI out. And if a web forum gets overrun by LLMs and the AI companies don't notice, their models can effectively self-immolate 😀
As far as the value of an AI forum: there is none. The purpose of a forum is for people to learn from and interact with each other. AI doesn't benefit from that, and if you are just looking for an AI's answer to something, just ask your model directly.
[0] LLMs actually can't 'learn' because they don't 'know'. They are just really fancy random-number generators with large state graphs.
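To illustrate what I mean by 'fancy random-number generator': at inference time the model just turns its state (the prompt so far) into a probability distribution over the next token and rolls the dice. A crude toy version (the tokens and probabilities here are invented for illustration, not from any real model):

```python
import random

# Toy stand-in for an LLM's next-token step: the "state" maps to a
# probability distribution, and generation is a weighted dice roll.
next_token_probs = {"mat": 0.55, "sofa": 0.25, "roof": 0.15, "moon": 0.05}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())
print("The cat sat on the", random.choices(tokens, weights=weights)[0])
```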