Thinking about moderation, given the upcoming support in the project I am spending time on.

On an episode of the StackOverflow podcast I learned about a blocking tool (another episode from this month recommended a second blocking tool that looks at the favs on a tweet to determine which accounts to block).

Since this is an ongoing discussion, I think filtering like on Mastodon might be a sensible approach.

I want to hear other opinions.

I especially want to hear from people not identifying as male and from people of colour: content moderation is important because there is way too much harassment online.

Now is a good time because we haven't implemented much yet (only non-federated issues, where moderation can take place on the instance level).

The current state is writing down use cases and exploring approaches using Penpot.

Let me know (happily via DM if you prefer) if you are interested!

@RyunoKi not sure what you want to know, but I feel that instances blocking harassing instances already makes the Fediverse a relatively safe space by design. That's why it took me so much time to pick an instance that feels safe and keeps its block list well maintained.

Blocking by favs may not be as useful as on Twitter, since favs are used less.

The most annoying thing to me is with its tons of unaware users.

@RyunoKi filtering is and always was tricky, on Twitter as well as here, since it's difficult to filter only specific content.

@RyunoKi I've learned a lot about content moderation with Discourse. It is a great source of inspiration, although Discourse itself is not federated.

I started using Discourse five years ago and participated in all possible roles (newcomer, moderator, admin, developer of new features, trainer, bug fixer), in tiny communities as well as ones with a large number of participants (over 50,000).

Spending hours to figure out how it works in detail would not be a waste of your time 🙂

@dachary @RyunoKi

I took brainstorming notes both on #Lemmy and on #SocialHub on two concepts:

- Federated #moderation: Making moderation, both personal and instance-level, a more native part of the fedi, which may increase #inclusion while protecting #privacy

- Delegated moderation: Outsource moderation tasks to people known to be good moderators. For code projects this is interesting, as we are really talking about #maintenance tasks here.

I'd appreciate responses on these topics:

@humanetech @RyunoKi I spent a lot of time working and thinking about moderation in two different contexts, both of which are not (yet) federated:

* Forums based on Discourse
* Forges based on GitLab & Gitea

And both GitLab & Gitea are years away from what Discourse provides regarding moderation. If you're unconvinced, read the blog post regarding trust levels.

Before considering moderation in a federated environment, there is a lot of catching up to do 😅

@dachary @RyunoKi

Sure is. I was just mentioning it so you can take it as input for feedback collection :)

@RyunoKi @dachary

I added a comment to the Lemmy topic referring to this thread, and explicitly mentioning the interesting idea where moderation comes into the realms of Software Project Governance.

@humanetech @dachary @RyunoKi I think federated moderation is risky. In my opinion, if an instance is not capable of being moderated by the people who are on that instance, then it should be shut down.

Remote moderators create moral hazard, since they are people (or AI systems, more likely) who have no real stake in the particular instance being moderated. Remote moderation could be outsourced, just like Facebook does, in a very unethical manner to ultra-low-paid workers who are constantly viewing traumatic content. Do we really want to be doing that? (this is more of a rhetorical question)

If federated moderation exists then within certain regimes the government will insist that it have moderator status on your instance. Imagine you are someone in Russia running an anti-war instance, or in China.

Also remote moderators may be initially friendly but later turn hostile. The potential damage which can be done by someone with no investment in your instance who has gone rogue is considerable.

@bob yes, valid concerns. Though I feel that those aspects can all be taken into account by the way that these functionalities work.

Note that what you discuss is what I call Delegated Moderation. And what I call Federated Moderation is more about turning the moderation activity that now happens 'behind the curtains' in informal channels into #ActivityPub msgs, and having more transparency (but PII can be removed)

Btw. I had a prior discussion on the Lemmy topic here:

@humanetech @bob Formalising what already happens would be more viable.

So instead of following particular hashtags, when a moderation alert happens it could have its own activity type and be published into a moderation collection, maybe on the instance actor. Other instances could then subscribe to those collections, or not, and the alerts could be subject to the usual message scopes.
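A minimal sketch of what such an alert could look like, assuming an invented "ModerationAlert" activity type and made-up actor and collection URLs (none of this is existing ActivityPub vocabulary, it just illustrates the addressing idea):

```python
# Hypothetical moderation alert published into a per-instance moderation
# collection, which other instances may subscribe to (or not).

def make_moderation_alert(instance_actor, target_account, reason, scope="public"):
    """Build an ActivityStreams-style JSON object for a moderation alert."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "ModerationAlert",          # invented activity type
        "actor": instance_actor,            # the instance actor publishes it
        "object": target_account,           # the account the alert is about
        "content": reason,
        # published into the instance's moderation collection
        "target": f"{instance_actor}/collections/moderation",
        # the usual message scopes would still apply
        "audience": scope,
    }

alert = make_moderation_alert(
    "https://example.social/actor",
    "https://spam.example/users/mallory",
    "coordinated spam across several threads",
)
print(alert["target"])
```

Subscribing instances would then simply follow that collection like any other, rather than relying on informal back channels.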

@bob This is a discussion that will never fully resolve itself, but which comes up all over the place. I suspect it depends on context and objectives, and those likely change over the evolution of a group (or fractal groups, with sub- and super-groups...)

Slashdot has always used an interesting meta-moderation model, for example, with the flipside being that it's actually a *rating* system - the individual reader can use the ratings to dynamically filter out posts as wanted.
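The reader-side filtering that such a rating system enables can be sketched in a few lines; the post structure, scores, and thresholds here are invented examples, not Slashdot's actual scheme:

```python
# Same rated posts, different per-reader thresholds, different views.

def visible_posts(posts, threshold):
    """Return only the posts whose aggregate score meets the reader's threshold."""
    return [p for p in posts if p["score"] >= threshold]

posts = [
    {"id": 1, "score": 5, "text": "insightful comment"},
    {"id": 2, "score": -1, "text": "flamebait"},
    {"id": 3, "score": 2, "text": "ok comment"},
]

print([p["id"] for p in visible_posts(posts, 2)])   # -> [1, 3]
print([p["id"] for p in visible_posts(posts, -1)])  # -> [1, 2, 3]
```

The key point is that nothing is deleted: the collective produces ratings, and each reader decides their own cut-off.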

@bob As such, I don't think you can "resolve" federated moderation at a simple level without some sort of understanding of the power balance you want to set in stone between *all* parties involved. Roughly thinking, that could include:

- individual instance reader (ie you as reader)
- collective of instance readers
- your instance admin(s)
- other instance admin(s)
- other instance posters
- third party moderator
- third party moderator group

Each of those will have their own interests/values.

@bob oops, missed @humanetech out of previous reply.

The questions become about who can provide some kind of 'truth' re suggested moderation rules, who can apply that and to whom, and who can override that?

@humanetech @dachary @RyunoKi Outsourcing moderation sounds like centralising moderation though, which I thought is what we want to avoid.

@pixelcode @dachary @RyunoKi

No, it is not. You might delegate moderation tasks to any other fedizen or group you trust with them. It is not a random delegation either. If an admin wants to go on vacation, maybe they wanna delegate to another friendly admin for the time being. But there might also be a moderation co-op to which I can delegate, one that commits to upholding proper values in their manifesto. As admin I can still monitor their actions and review them.

(Just brainstorm here)

@pixelcode @dachary @RyunoKi

The recent influx of Twitterers is another example, where @Gargron was terribly busy scaling services and could have temporarily called in the help of a moderation group.

But this is on the instance level. The personal level is also interesting. If someone drops into my thread and half the people I follow have already blocked that person, I may receive a warning. Or, if that's too personal, I may get a warning that their instance gets a lot of blocks, suspensions and abuse reports.

That's more along the lines I had in mind: utilising the social graph (and providing a way to group people and share lists).

For example, I've read complaints that for new Mastodon instances you have to enter domains one by one if you go with the „known bad actors" lists. That should be easier, unless there is a technical reason it can't be.
@pixelcode @dachary @Gargron
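The social-graph warning described above could work roughly like this; the data shapes, account handles, and the 0.5 threshold are all invented for illustration:

```python
# Warn when a large share of the accounts I follow already block a user.

def block_warning(following, blocks_by, newcomer, threshold=0.5):
    """Return True if the fraction of my follows who block `newcomer`
    meets the threshold. `blocks_by` maps account -> set of blocked accounts."""
    if not following:
        return False
    blockers = sum(1 for f in following if newcomer in blocks_by.get(f, set()))
    return blockers / len(following) >= threshold

following = ["alice@a.social", "bob@b.social", "carol@c.social", "dan@d.social"]
blocks_by = {
    "alice@a.social": {"troll@x.example"},
    "bob@b.social": {"troll@x.example"},
    "carol@c.social": set(),
}

print(block_warning(following, blocks_by, "troll@x.example"))  # -> True (2 of 4)
```

In practice the block lists would have to stay private or aggregated, which is where the privacy questions raised later in the thread come in.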

@humanetech @dachary @RyunoKi I'm wondering what Mastodon and other federated social media do to protect your privacy?

@anedroid @dachary @RyunoKi

Good question. I am not really sure. I gather that only instance staff (mods and admins) can see the details of reports, blocks, etc.
