In case you didn’t know, recursively training an AI on content generated by another AI degrades it: each generation amplifies the previous one’s distortions and the output quality drops. It is also very difficult to reliably filter AI text out of human text in a dataset. This phenomenon is known as model collapse.
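A toy way to see the effect: fit a simple model (here just a mean and standard deviation, a stand-in for a real generative model), sample from the fit, refit on the samples, and repeat. The diversity of the data shrinks over generations. This is a minimal sketch, not a claim about how any particular model is trained; the sample size and generation count are arbitrary.

```python
import random
import statistics

random.seed(0)

# "Human" data: draws from a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(20)]

stds = []
for generation in range(200):
    # "Train" a model on the current data: estimate mean and std.
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    stds.append(sigma)
    # Replace the dataset with the model's own output and repeat.
    data = [random.gauss(mu, sigma) for _ in range(20)]

# The estimated spread tends to collapse toward zero over generations:
print(f"initial std: {stds[0]:.3f}, final std: {stds[-1]:.3f}")
```

The estimate of the spread is slightly biased low at each step, and with no fresh human data to correct it, the errors compound instead of averaging out.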
So if people started using AI to generate comments and posts on Reddit, its database would become less useful for training AI, and the company wouldn’t be able to sell it for that purpose.
They probably want you to edit your comments to poison them.
They probably are using AI bots to make astroturf posts already.
Imagine how much it’s worth to Google to train an AI to recognize other AI-generated posts. Imagine how much it’s worth to Google to have a training set of “poisoned” data, and to be able to compare it to the original post, which they can do since Reddit saves your edits on the backend. Not to mention training on genuine reactions by users to AI posts and to obvious poisoning. They’ll be able to use all of that to train their own AI to not be defeated by these tactics.
I don’t know what should be done, but I feel like trying to defeat AI training this way actually plays right into their hands.