Making the moderation of the CTF server more automatic and painless #1157
> The first approach is plain "type in whatever X typed to ban Y so you can ban Z". As far as I understood, your machine-learning system is based on the reports themselves, so the "smart" deus ex machina will never compare the report contents to the actual happenings on the server; it will ban exclusively on report syntax. Needless to say, it is trivial to enter the server with a new login and password, so you'll end up with a name-agnostic, fact-agnostic instant-ban machine. And how are you going to separate cheaters from actual high-skill players, or combat the zero-skill crybabies? The interesting part is that you'll get actual feedback in only two cases: once you run out of players to ban, or once it bans someone "important", as I doubt many people will rush to your ban-dispensing Discord server inquiring about their ban reason. They'll just drop the server. The second approach is abusable, albeit very inconvenient: votekicking by majority. Except the majority is pretty much a bunch of gatekeepers, so you might eventually end up with a bunch of idiots there (aka friends of friends of a moderator).
> I've coded something that makes moderation way easier. However, I'm not willing to share it here.
Problem
In games like CTF, unlike building-oriented Minetest servers, everything happens in real time, so moderation must also happen with minimal delay. The longer it takes for appropriate action, the more frustrated the players online become, as cheaters, spammers, and other rule violators kill the fun of the game.
Current approach
In the current approach, there is a group of game moderators who can ban, kick, and revoke privileges such as shout or interact. There is also a secondary group called "Guardians" who can only kick players, and whose reports are given higher priority.
However, this approach has pitfalls. To keep the game healthy 24/7, we need enough Guardians to cover every hour of every day of the week. Assembling a group that covers even a single full day is neither easy nor simple, and covering the whole week is harder still. Another disadvantage is that real-life events distract people from moderating. Thus we also need a group of people dedicated to finding trustworthy players and granting them the Guardian role. The whole process is done by hand rather than automatically, which makes it even worse.
Suggested approaches
Using Machine Learning to detect violations of rules which happen in the chat
This problem can be seen as a binary classification ML problem, and we already have a dataset in Discord: the chat history in the `server` channel, plus a `reports` channel that tells us which messages, or sequences of messages, are considered violations.
Automatically recruit trustworthy players and let a group of them do some moderation tasks
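To make the binary-classification idea concrete, here is a minimal sketch using a naive Bayes text classifier built only from the standard library. The training data below is invented for illustration; in practice the messages would come from the Discord `server` channel and the labels would be derived from the `reports` channel.

```python
# Sketch: naive Bayes classifier for flagging rule-violating chat
# messages (0 = acceptable, 1 = violation). Toy data, invented labels.
import math
from collections import Counter

def train_classifier(messages, labels):
    """Count token frequencies per class."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    priors = Counter(labels)
    for msg, label in zip(messages, labels):
        for token in msg.lower().split():
            counts[label][token] += 1
            totals[label] += 1
    vocab = set(counts[0]) | set(counts[1])
    return counts, totals, priors, vocab

def classify(model, msg):
    """Return the class with the higher Laplace-smoothed log score."""
    counts, totals, priors, vocab = model
    scores = {}
    for label in (0, 1):
        score = math.log(priors[label] + 1)
        for token in msg.lower().split():
            score += math.log(
                (counts[label][token] + 1) / (totals[label] + len(vocab))
            )
        scores[label] = score
    return max(scores, key=scores.get)

# Invented labelled examples standing in for the real Discord dataset.
messages = [
    "gg well played", "nice capture", "good game everyone",
    "you all suck idiots", "stupid noob uninstall", "idiots reported you noob",
]
labels = [0, 0, 0, 1, 1, 1]
model = train_classifier(messages, labels)
```

On this toy data the model separates obvious insults from friendly chat; a real deployment would of course need far more data, evaluation against held-out reports, and human review of anything it flags.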
Approach one
Approach two
TBD
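While the concrete approaches are still TBD, the recruitment side could start from simple trust thresholds. The sketch below is purely hypothetical: every field name and threshold is invented for illustration, not an agreed design.

```python
# Hypothetical heuristic for auto-nominating Guardian candidates.
# All field names and thresholds below are invented placeholders.
def is_guardian_candidate(player):
    return (
        player["days_since_first_join"] >= 90          # established account
        and player["hours_played"] >= 100              # knows the game
        and player["confirmed_reports_against"] == 0   # clean record
        and player["valid_reports_filed"] >= 3         # has helped moderate
    )

# Invented example records.
veteran = {
    "days_since_first_join": 400,
    "hours_played": 250,
    "confirmed_reports_against": 0,
    "valid_reports_filed": 7,
}
newcomer = {
    "days_since_first_join": 5,
    "hours_played": 2,
    "confirmed_reports_against": 0,
    "valid_reports_filed": 0,
}
```

A threshold check like this would only produce a shortlist; a final human decision (or a probation period with kick-only powers) would still be needed before granting the role.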