Would definitely recommend furry.engineer - but if that’s not your jam, pawb.fun is run by the same team. Reg is open for both with approval required, but the approval is just reading the rules and telling them a little about yourself. If they’ve approved you for Pawb Social, link that in the box, and they’ll probably approve you for the Mastodon instance too.
I’ve upvoted this but I’d just like to chuck in that I think Raven makes a lot of sense here. I’ve had posts deleted or hidden by automod bots on other sites, and even when they’re restored they don’t get as much traction as posts that were left alone. So there’s an effect even if the action can be “reversed” - and I put that in quotes because it’s not like you can turn the clock back.
Hard agree on no shadowbans, keeping users informed, and easy escalation to a human.
My ideal would be some kind of system which looks at the public feed for keywords and raises anything of concern to an admin, and maybe the admin’s response goes back in as ‘training’. Something more like SpamAssassin’s Bayesian ham/spam classifier perhaps.
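To make that concrete, here’s a toy sketch of what I mean - a SpamAssassin-style Bernoulli naive Bayes classifier that only *flags* posts for an admin, and feeds the admin’s ruling back in as training. All the names, labels, and the threshold are made up for illustration; a real system would need proper tokenisation and a lot more care:

```python
from collections import defaultdict
import math

class BayesianFlagger:
    """Toy Bayesian ham/spam-style classifier. It never acts on a post
    itself: it only raises likely-bad posts to a human admin, and the
    admin's ruling ('ok' or 'bad') goes back in as training data.
    Names and the threshold are illustrative, not from any real system."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold  # flag for review if P(bad) >= this
        # Per-class document frequency of each word (Bernoulli model).
        self.counts = {"ok": defaultdict(int), "bad": defaultdict(int)}
        self.totals = {"ok": 0, "bad": 0}  # documents seen per class

    def train(self, text, label):
        """Record an admin's ruling on one post ('ok' or 'bad')."""
        for word in set(text.lower().split()):  # count each word once per post
            self.counts[label][word] += 1
        self.totals[label] += 1

    def p_bad(self, text):
        """Posterior P(bad | words present), with add-one smoothing.
        Simplified: only words present in the post contribute."""
        log_odds = math.log((self.totals["bad"] + 1) / (self.totals["ok"] + 1))
        for word in set(text.lower().split()):
            p_w_bad = (self.counts["bad"][word] + 1) / (self.totals["bad"] + 2)
            p_w_ok = (self.counts["ok"][word] + 1) / (self.totals["ok"] + 2)
            log_odds += math.log(p_w_bad / p_w_ok)
        return 1 / (1 + math.exp(-log_odds))

    def should_flag(self, text):
        """True means 'show this to a human' - never 'delete this'."""
        return self.p_bad(text) >= self.threshold
```

The key design point is that `should_flag` is the *only* output - the human in the loop makes the actual call, and each call becomes another `train` example, which is roughly how SpamAssassin’s `sa-learn` feedback loop works too.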
I don’t think automated actions without a human in the loop are the right way to go - and I have grave concerns about biases creeping into the model over time. The poster child here is pretty much Amazon’s HR résumé review system, which ended up with racist biases. There’s been a lot of good progress improving PoC/BIPOC/BAME/non-white acceptance, and it’d be a shame if something like this accidentally ended up marring or undoing some of that.