
15 September 2023

Addressing Moderation Concerns Raised by rage.love and simcha.lgbt

by tech.lgbt Moderation Team

We will be addressing the points raised here:

https://simcha.lgbt/@admin/111054336232329797 (if this link doesn't resolve for you, paste it into your Fediverse instance's search bar). We believe this post summarizes the current discourse around us in a single accessible place.

Regarding the first point, about the weirder.earth fediblock post in February, we'd like to elaborate on the changes we made to our moderation process as a result. Since that event, we have been quiet about these improvements, as we hoped they could speak for themselves rather than being announced with no follow-up, and we hope the past seven months have been evidence that we've tightened up this ship.

However, as these improvements are understandably not as visible as we had hoped (more on that later), we will share a few of them here.

Starting with our moderation process itself: where we had previously delegated reports to individual moderators to handle, in an effort to "spread the load", we realized this approach was not working well enough. We introduced a reports channel in our Discord server, where we create an individual thread for every single report we receive so that the whole team can weigh in before a decision is made.
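For illustration only, a flow like this could even be automated. The sketch below is not our actual tooling; it assumes Mastodon's "report.created" admin webhook (available since Mastodon 4.0) pointed at a small discord.py bot, and all IDs, tokens, and payload fields are placeholders.

```python
# Illustrative sketch only, not our actual tooling. Assumes Mastodon's
# admin webhooks (4.0+) are configured to send "report.created" events to
# this endpoint. A real deployment should also verify the X-Hub-Signature
# header, which is omitted here for brevity.
import asyncio

import discord
from aiohttp import web

REPORTS_CHANNEL_ID = 123456789012345678  # placeholder: the reports channel

intents = discord.Intents.default()
client = discord.Client(intents=intents)


async def on_mastodon_webhook(request: web.Request) -> web.Response:
    payload = await request.json()
    if payload.get("event") != "report.created":
        return web.Response(status=204)

    # Field names follow Mastodon's admin report serializer and may vary
    # by version; this also assumes the bot has already connected.
    report = payload.get("object", {})
    channel = await client.fetch_channel(REPORTS_CHANNEL_ID)

    # One thread per report, so discussion and consensus stay in one place.
    thread = await channel.create_thread(
        name=f"Report #{report.get('id', '?')}",
        type=discord.ChannelType.public_thread,
    )
    await thread.send(report.get("comment") or "(no comment provided)")
    return web.Response(status=200)


async def main() -> None:
    app = web.Application()
    app.router.add_post("/mastodon-webhook", on_mastodon_webhook)
    runner = web.AppRunner(app)
    await runner.setup()
    await web.TCPSite(runner, "127.0.0.1", 8080).start()
    await client.start("BOT-TOKEN-HERE")  # placeholder bot token


if __name__ == "__main__":
    asyncio.run(main())
```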

Over the months, this approach has helped us collectively identify weak points, surface areas of expertise that individual moderators already held, and become more aware of our individual biases and of who might be best suited to correct for them. (We understand that recognizing and correcting biases is not a perfect way to go about it, but it is better than leaving decisions up to individual mods alone.)

One example of an area where we lacked expertise was the influx of Japanese users onto our instance over the past few months. We welcome people from the Japanese LGBTQ+ community and hope they can find a home here. But as more users arrived, we became aware of a few concerning trends on the Japanese side of fedi, mainly the ongoing infiltration of the Japanese LGBTQ+ community by users defending pedophilia. We received more and more reports about users on our instance who either appeared to be part of this group or were portrayed as such (notably presenting themselves under the banner of "no discrimination"). After a few notable internal near-misses and a growing collective understanding of the problem, we halted processing these reports until we could get a Japanese speaker to weigh in and discern the nuances that we, as predominantly English-speaking moderators, could not see.

We’re still underway with this process, but we hope that a glimpse of our internal process in this regard demonstrates how we tackle situations like this.

We still have internal problems, notably the manual handling of emails and DMs (to @mods): these have to be relayed by the few moderators with access to those inboxes, copy-pasted into our Discord server, and, once consensus is reached, a response copy-pasted back out (from @mods). We still need to find an automated solution for this, but for now, this is how we approach collective responses.
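As a rough sketch of one direction we might take (not a finished solution): a small script could poll the shared inbox over IMAP and forward new mail into our Discord server through a webhook. The host, credentials, and webhook URL below are placeholders.

```python
# Sketch: poll the shared inbox over IMAP and relay new mail to Discord
# via a webhook. All connection details here are placeholders.
import email
import imaplib
from email.header import decode_header

import requests

IMAP_HOST = "imap.example.com"                        # placeholder
IMAP_USER = "mods@example.com"                        # placeholder
IMAP_PASS = "app-password-here"                       # placeholder
WEBHOOK_URL = "https://discord.com/api/webhooks/..."  # placeholder


def decode_subject(msg: email.message.Message) -> str:
    """Decode a possibly MIME-encoded Subject header."""
    raw, enc = decode_header(msg.get("Subject", ""))[0]
    return raw.decode(enc or "utf-8", "replace") if isinstance(raw, bytes) else raw


def relay_unseen() -> None:
    imap = imaplib.IMAP4_SSL(IMAP_HOST)
    imap.login(IMAP_USER, IMAP_PASS)
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        # Fetching the message also marks it \Seen, so reruns skip it.
        _, msg_data = imap.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        body = ""
        for part in msg.walk():
            if part.get_content_type() == "text/plain":
                body = part.get_payload(decode=True).decode("utf-8", "replace")
                break
        # Discord webhook messages are capped at 2000 characters.
        content = f"**New @mods email:** {decode_subject(msg)}\n{body}"[:2000]
        requests.post(WEBHOOK_URL, json={"content": content}, timeout=10)
    imap.logout()


if __name__ == "__main__":
    relay_unseen()  # run periodically, e.g. from cron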

More recently, we also launched our official info site (https://info.tech.lgbt/), which documents our general instance information, moderation guidelines, instance rules, instance assets, and a reference for content warning acronyms. The site is incomplete but should serve as a solid foundation for a public resource.

Internally, we use Notion as a knowledge base and document collaboration resource, where we note down and share information about larger events on the Fediverse, helping each other keep track and stay up to date.

Other than that, we now have auxiliary channels on Discord where we can each post notable things happening around the Fediverse, which keeps us more situationally aware of current events and reduces the risk of moderators "splitting off" and making decisions without the context of those events, or even of ongoing threads. The latter is still a sore point, as Discord does not show a full list of active threads at all times, so we try to stay active in pulling relevant moderators into conversations and reminding each other where certain discussions are happening. We still find this causes lapses in awareness, and it is an area that needs improvement. (Possibly with a bot that briefly mentions all moderators in every single thread to pull them into it and then deletes the message, but this requires more investigation; a rough sketch of the idea follows.)
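For what it's worth, a first pass at that bot could be quite small. This is only a sketch, assuming the moderator role is mentionable by the bot; the IDs and token are placeholders.

```python
# Sketch of the bot idea above, using the discord.py library: when a
# thread is created in the reports channel, post one message mentioning
# the moderator role (which adds its members to the thread), then delete
# the message to keep the thread clean. IDs and token are placeholders.
import discord

MOD_ROLE_ID = 123456789012345678         # placeholder: moderator role ID
REPORTS_CHANNEL_ID = 987654321098765432  # placeholder: reports channel ID

intents = discord.Intents.default()
client = discord.Client(intents=intents)


@client.event
async def on_thread_create(thread: discord.Thread):
    # Only act on threads in the reports channel.
    if thread.parent_id != REPORTS_CHANNEL_ID:
        return
    # Mentioning the role pulls its members into the thread; deleting the
    # message afterwards leaves them added without the noise.
    msg = await thread.send(f"<@&{MOD_ROLE_ID}>")
    await msg.delete()


client.run("BOT-TOKEN-HERE")  # placeholder bot token
```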

We now have a wide range of perspectives active on our moderation team, but the current situation shows us that, even with improvements in this area, we still need to seek out an additional BIPOC Queer moderator: someone who could better identify blind spots like these.


The second point raised comes with a criticism from our side. We engaged with rage.love's points as candidly as we could, but in the process hostility arose over our reluctance to punish or expose the user in question.

In the interest of openness, we have copied and formatted the entire email correspondence about this issue for everyone to read: https://pastebin.com/GDryZH0T

Our internal assessment was that this user was overzealous to the point of harassment in their reporting, and we believed that, rather than punishing them, we could educate them into changing their stance, or at the very least into understanding that using language in this way causes direct harm to people. We have taken this approach with our younger users before to help them understand their own actions.

With the benefit of hindsight, this was the wrong decision, as the language was inexcusably racist in nature.

We have held the belief that talking to users before punishing them on a first strike can succeed in making them realize their faults and prevent future missteps. To that end, we don't expect our users to be perfect, but we do expect them to acknowledge and work toward changing problematic behaviour. The real error in our actions here is that this tolerance cannot be extended to this kind of harassment, even when it occurs within the private context of reports. It is with this clearer understanding that we have since appropriately suspended the user.

Previously, in interactions with moderators of remote instances, we were inclined to treat the content of a report as more important than its delivery, as long as the delivery was not too overtly offensive. In this case, however, we failed to recognize that a user who uses harassing language in a report is likely to repeat that language outside of reports. To rehabilitate them successfully, we needed to send a strong message that this behaviour was not okay; instead, we were seen as having taken the user's side.

This was a rare instance where we had offensive remarks from one of our users in a report they made. Our lack of experience with this, combined with the number of valid reports shared by that user, and the lack of reports against them, tempered our response.

As an aside, our reluctance to expose this user during the correspondence was to avoid inciting retaliatory harassment against them. In general, we do not wish to incite harassment, nor have it result primarily from our actions (as happened in this case), because it does nothing constructive to break the "cycle of harassment" mentioned before. We believe that through remediation we learn how this cycle manifests, so we can better ourselves and end it. We wish to stop or stall it wherever possible, to let people heal and understand how their actions affect others. Our duty as moderators is to protect our users from such harm and to help break that cycle, though this protection is rescinded once someone severely breaks the rules of our instance and can no longer be considered our user.

Going forward, this incident has made us aware of this blind spot, and we will be amending our guidelines so that a reporter's language is moderated as strictly as the posts they report, for the few cases where this has been applicable and the few (we hope) where it will be.


To the third point, referring to the situation around thebad.space and @shadowjonathan's post: we concur that @shadowjonathan has already apologized, but due to the sensitive nature of the subject, and because the mod team has not yet reached internal consensus on how to proceed publicly with this issue (and the situation in its entirety), we cannot comment further on this matter in a shared mod team response. What we can say for now is that escalating conflict to the point of harassment, name-calling, and dogpiling does not help communities find a healthy resolution, and we want to avoid these as much as possible when we do provide a response.


To the fourth point, about a "known harasser": shortly before and while responding to these points, we identified a user matching this description, based on a report originating from a separate third-party instance. We then found further troubling information about this user and promptly suspended them. We have reached out to confirm that this was indeed the user in question, but have yet to receive a response as of this writing.


A few last notes that we hope will help explain the position we are in as a group of moderators.

We are human and we make mistakes, so the best we can do is build additional layers into the Swiss cheese model that help us catch those mistakes. Even then, none of us is perfect at that. We hope that the majority of this year passing without incident, and our users' happiness with the vibes we create on our instance, can attest to this in our stead.

Doing moderation publicly is emotionally draining. Before we created @mods@tech.lgbt, our moderation conduct was built around withholding receipts and reasoning: we would spend one ounce of energy moderating, then ten more justifying each decision to every user, trying to explain the complicated context of every hard call while bad-faith users outside our circle interrogated our every word, misconstrued our intentions, and used them as evidence against us. We would rather preserve our energy for actual moderation (the need for which has steadily increased with the growth of the Fediverse as a whole over the past year, and we do not want to burn ourselves out); hence the rather closed-off nature of our moderation practices.

We recognize this has made our process opaque, with no way for users to view, examine, or verify the process changes we've made over these last several months. As those results are systemic rather than material, they are not easy to point to.

We are still not ready to switch to a more public moderation practice. We have taken steps toward it, but these have partially stalled due to the demands on our time, uncertainty, and repeated confirmation that showcasing our actions to the public requires us either to correct every single poorly worded sentence or to carefully scrutinize our wording in public statements such as this one, so that our detractors have no basis for misinterpretation; all of which saps our energy.

We wish to be more humane and approachable in our moderation, but paradoxically we've found that this means remaining silent, which is unfortunate and something we want to change, if that is even possible.

Aside from that, as mentioned before, we sometimes have to make hard calls as moderators that not everyone will agree with, weighing actions to see which causes the least harm in the short and long term. As much as we try to avoid these scenarios, we sometimes find ourselves in them regardless, and these decisions end up being subject to the most criticism. They are decisions we cannot explain without the other person already having a moderator's perspective, and trying to give others that perspective costs energy and time we don't have.


Throughout all of this, we hope that what we have shared here shows that the last few days have been hard on us, and that we make mistakes. We are sorry for those mistakes, and we are sorry that we let them linger in many respects, causing hurt and uncertainty. We want to show that we intend to improve and learn from these mistakes, and that we are trying our best.
