Community well-being / moderation procedures

Ideas for future improvements

If the community membership grows and there is more forum activity, the moderation policy can be further improved by having a Well-being team, just as the SocialHub has. This team should ideally include non-staff members as well as staff, so that accusations against staff members can be handled together in a fair and transparent fashion.

Another improvement would be a system user on the forum that reports the outcome of moderation procedures in the public feedback category. This is because any decision is carried by the entire moderation team (in this case the forum staff): consensus was already reached. When the report comes from a system user, there is less risk of people holding a grudge against an individual reporter.

What the SocialHub does differently with its Well-being team is that the Moderation category is accessible to forum members at the highest trust levels. I think level 2 is specified for this, but it might be set higher. The advantage is that there can be more transparency, with more people giving their valued opinion and feedback. With a Well-being team in place there is always the opportunity to handle very sensitive, delicate issues separately, because, as a Discourse group, they have a group message box as well.

Lastly, Discourse offers the capability to set Policies for specific trust levels. They work such that a member MUST consent to the policy when they reach the trust level, or they will get reminders of the policy on every visit to the forum. So the policy could highlight the Code of Conduct and the Moderation Procedures and ask for consent, which is then tracked. People can no longer claim that they were unaware of the various procedures.
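As a sketch of what such a policy could look like, assuming the discourse-policy plugin is used: the attribute names below (group, version, reminder) are assumptions and should be checked against the plugin documentation.

```text
[policy group=trust_level_2 version=1 reminder=weekly]
By participating at this trust level you agree to abide by the
Code of Conduct and the Moderation Procedures of this forum.
[/policy]
```

Bumping the version would re-request consent from everyone after the Code of Conduct changes.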


Having a moderation process in place, even at a very early stage, was essential. The alternative is to set up a moderation process when it is needed, i.e. when behaviors contrary to the Code of Conduct happen. But that is also the moment when there is a strong incentive to prevent the creation of a moderation process. This happened in late 2020 in the CHATONS forum and it took months to finalize it.

The devil is in the details though and there is much more to moderation than what is described in the category description at the moment. I agree that it needs to be developed and this can be done after the community grows and when the burden of a more complicated process is balanced by the number of people involved.

This makes sense. The current moderation contact is mailto:contact@forgefriends.org but it is inconvenient when moderation becomes frequent. Creating a forum group which can be associated with this email would achieve what you propose. That is also what was set up in the CHATONS forum.

I agree on all counts. There is another reason for having a category that is not public: moderation is ultimately about acting against a person. The rationale for the moderation must be transparent for other community members to question and audit the moderation team. But it must be removed after a period of time so that it is not a permanent stain on someone’s reputation. If the category were public, it would spread over the internet, and this would not be right.

Interesting! I’ve never been involved in communities where that turned out to be necessary. I would be really interested to read first-hand testimonies of actual problems this was meant to solve.

The way the SocialHub Well-being team is defined is terse, and the SocialHub forum is a space where there is no Code of Conduct. It is my understanding that this absence is a deliberate decision. No matter what people think about Codes of Conduct, there is a consensus that there is a significant difference between communities that have them and communities that refuse them. I left SocialHub because the tone of a conversation I was engaged in became less than friendly. This was not the first time I observed such behavior, and I don’t remember anyone from the well-being team stepping in to defuse the tension. I’m mentioning this first-hand experience to illustrate why I’m unconvinced SocialHub can be used as an example.

The CoC is packaged into a Policy as can be seen here:

On paper I really like the setup that the SocialHub uses. But any moderation policy, and by extension the community health, stands or falls with the participation and activity of the members tasked to carry it out. On SocialHub the procedures form a good basis, but the lack of follow-up makes them a paper tiger.


I entirely missed this, I stand corrected, thanks for the link. One additional clarification about this sentence:

Members of the Social Web Incubator CG agree to abide by the W3C Code of Ethics and Professional Conduct.

I assume Members of the Social Web Incubator CG means the forum users?

I cannot comment on this because the moderation history is not transparent enough. This is why the proposed moderation process for forgefriends has specific transparency requirements. When I feel unsafe for the first time in a space I look for:

  • The Code of Conduct and if there is none I’ll quickly leave because I assume nobody cares
  • How it was enforced in the past and if I can’t find any I’ll quickly leave because I assume nobody cares

Of course everyone has a different process and mine is rather simplistic. In the case of a community that has recently been established with only a handful of participants, I won’t assume that the lack of past enforcement history means that nobody cares, because it is more likely that the first moderation action simply has not happened yet. In reality I’m a little more flexible, but the bottom line is that I’m really quick to run away when tension builds :slight_smile:

Yes, I think here you are looking at points for improvement that were never implemented. Until recently the W3C SocialCG and SocialHub acted in unison, with the SocialCG sort of in the lead and the SocialHub community as a companion to it. But not all SocialHub members signed up at the W3C for the SocialCG (including me, until now), so that sentence is inaccurate.

Yes, I think a public part of moderation, anonymised, is an improvement over what SocialHub currently has. On SocialHub you can only see moderation activity with the proper trust level. On the other hand (and I don’t know if this is well enough explained), for any issue there is always the well-being team that can be approached and who guarantee safe handling and mediation of the issue.

I know from experience that handling these cases can be a very bitter pill. It is no fun at all and can be very intense and time-consuming. It requires a lot of social skills and tactfulness, and always runs the risk of offending some parties no matter how you deal with it. It is the thankless part of community management that is, unfortunately, inherent to it :sweat_smile:

I think the best way to defuse tension in a space is to have a mediation team. Its essential task is to observe all interactions and intervene when they sense tension building up, but before it becomes problematic. It is about preventing problematic situations, whereas the moderation team is about fixing them.

In October 2020 a six-month experiment establishing a mediation team was conducted in the CHATONS community, and a report was published in March 2021 summarizing how it went and how it could continue.

I would like to see something similar happen in the forgefriends community. Tension did build up a few times in the past year. It never got to the point where moderation would have been necessary. But these situations would definitely have been in scope for the mediation team.

And calling it a mediation team is highly confusing, because people think of it as a moderation team. That was a mistake (even in French). A well-being team would be a much better name (bien être in French). A key difference between the two (and it turned out to be a good idea) is that the well-being team has no special permissions. Its members are not moderators or administrators. They are just regular members of the forum and should not even wear a badge.

I feel that is not the best approach for several reasons:

  • The well-being team should be a cross-section of the community that represents everyone. Staff have additional responsibilities and tasks which make them different ‘stakeholders’. All should be represented in a multi-stakeholder, inclusive community.

  • If the well-being team only has the power to discuss, with no other tools, and has to kindly request staff to act on any issue, then it is a toothless tiger too. Many day-to-day menial moderation tasks can be handled immediately, and when a staff member is part of the consensus-making of the well-being team that can happen directly. Without staff on board, the well-being team is dependent on the availability, readiness and willingness of staff to follow up: there is a hand-over interface. Meanwhile, direct follow-up on well-being issues is most efficient for community health and most satisfying to the people involved in the issue.

  • If there is no staff on board, then staff remain the highest authority and the community is a hierarchical one. If there is disagreement between staff and the well-being team, then staff wins automatically, even when they may be at fault. Staff may come to see the well-being team as subordinate, a mere discussion body, especially when lots of text is created there that staff would need to read to understand decisions.

The best formation of the well-being team would be a minimum of five people, and perhaps a maximum of nine:

  • An odd number helps reach a majority when voting on a course of action.

  • The majority of the team should be regular members (perhaps at a certain trust level).

  • There should be a minimum of two staff members, ideally one of them an admin.

The minimum of two staff members is very important. A well-being issue can be directed against a staff member, and that member might be part of the well-being team. In that case the rule is that they do not participate in the well-being discussion and let the other team members come to a decision. How exactly they do that (e.g. keeping everyone on CC or not) is decided case by case, but they must be clear about the procedure they follow (e.g. “We will first discuss our approach privately without those directly involved, and then propose a way forward to solve the issue” could be a valid policy; those directly involved may still object to it).
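As a sketch only, the composition rules proposed above can be written down as a small check. The function name and parameters are hypothetical, not part of any forum software:

```python
def valid_composition(regular: int, staff: int) -> bool:
    """Check the proposed well-being team composition rules:
    5 to 9 members, an odd total, a majority of regular members,
    and at least two staff members."""
    total = regular + staff
    if not 5 <= total <= 9:
        return False
    if total % 2 == 0:        # an odd number eases majority votes
        return False
    if regular <= staff:      # regular members must hold the majority
        return False
    if staff < 2:             # so an accusation against one staff member
        return False          # can still be handled by the other
    return True
```

Taken together, these constraints only allow totals of five, seven, or nine members, with staff holding between two and four seats.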

I agree, though people should be able to find out somewhere who is in the well-being team. But that can be seen in the Groups menu.

Before replying I’d like to make sure I made myself clear. I advocate for two teams to exist:

  • Moderation team with power to enforce the Code of Conduct, tasked with fixing problems
  • Well being team with no power at all, tasked with preventing problems from happening

I feel that maybe you thought I was proposing a single team.


Ah, thanks for that clarification. I was indeed on the wrong footing, thinking there would be just one team. I still feel that only one team is needed: the other team exists implicitly in those staff members who are not on the well-being team. They should let the well-being team handle most things, but they are a backup in case someone is unavailable or an issue can be trivially handled (e.g. an unwarranted Akismet flag on a new member signup).


Unfortunately all the material regarding the CHATONS well being team experiment is in French. Including the rationale for having it separate from the moderation team and the upside of their lack of power.

My main source of inspiration is the LibreOffice community, where I’ve had an excellent experience over the years. It boils down to something really simple: members of the well-being team observe everything in the forum. They are also active members of the forum, except that they carry the extra burden of reading all discussions, even when they are not especially interested in the topic. That is their opportunity to sense when tension is building up. When and if that happens, they will step in to try to defuse it.

Since they have no power of any kind, they help with their presence only. I know people who have a soothing effect on a group in real life just by standing there and saying nothing, and others who have a talent for choosing the right words in a forum to defuse tension. Each person on the well-being team has their own way of stepping in, and there is no manual for this. It is subtle and varies a lot from person to person.

I can only speak from my own experience to provide an example of what it meant for me to step in. On one occasion two people started to argue about something. The tone changed, and the number and frequency of messages between them seemed to indicate that it could escalate. Since it was just the two of them arguing, I decided to take an interest in the topic they were discussing. It took me a few hours to research it properly, and I posted a comment including the information I had gathered, related to the topic they were arguing about, as well as some questions. The discussion went on and they were still arguing, but now there were three people involved, and I made sure each of my messages supported at least one side of the argument. I’m sure they saw right through me, but they did not ignore me: there are worse things than having someone interested in a conversation to defuse tension :slight_smile:

In the end the conversation did not escalate and I will never know if my intervention was just a waste of time or useful. But this is the essence of prevention, isn’t it?