Ariadne Conill 🐰:therian: @ariadne@treehouse.systems
replies: #2 #12 #23 #25 #27 #29 #30 #32 #34

things i would like to see in mastodon that pleroma has been able to do for years:

- the ability to defederate an instance except for *explicitly approved* accounts (pleroma has supported this since the beginning of MRF in 2018)

- the ability to defederate a hashtag (pleroma has supported this since 2019)

- the ability to quarantine unknown instances until they are approved by the admin (pleroma has supported this through a combination of multiple features since 2019)
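
For a rough sense of what the first two items look like on the Pleroma side, here is an abbreviated configuration sketch. The policy module names are real mainline ones, but option shapes (plain hostnames vs. {host, reason} tuples, exact key names) vary by release, and whether SimplePolicy and UserAllowListPolicy compose into exactly the "reject the instance except listed accounts" behavior depends on policy ordering, so treat it as illustrative rather than copy-paste:

```elixir
# Sketch only; verify key names and value shapes against your Pleroma release.
config :pleroma, :mrf,
  policies: [
    Pleroma.Web.ActivityPub.MRF.SimplePolicy,
    Pleroma.Web.ActivityPub.MRF.UserAllowListPolicy,
    Pleroma.Web.ActivityPub.MRF.HashtagPolicy
  ]

# instance-level defederation
config :pleroma, :mrf_simple,
  reject: [{"spam.example", "spam"}]

# explicit per-account exceptions, keyed by instance (hypothetical account URL)
config :pleroma, :mrf_user_allowlist, %{
  "spam.example" => ["https://spam.example/users/goodposter"]
}

# hashtag-level controls: force sensitive, or drop outright
config :pleroma, :mrf_hashtag,
  sensitive: ["nsfw"],
  reject: ["blockedtag"]
```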

Ariadne Conill 🐰:therian: @ariadne@treehouse.systems
in reply to #1 - replies: #3

- the ability to forcibly mark posts with content warnings based on keywords in their content (2018)

- the ability to forcibly mark media as sensitive based on keywords in the post's content (2018)

- the ability to reject media selectively based on various admin-configured signals (2018)
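
For the keyword-matching flavor of these, mainline Pleroma ships Pleroma.Web.ActivityPub.MRF.KeywordPolicy. An abbreviated sketch of its configuration follows; the patterns are illustrative and exact option shapes vary by release:

```elixir
config :pleroma, :mrf,
  policies: [Pleroma.Web.ActivityPub.MRF.KeywordPolicy]

config :pleroma, :mrf_keyword,
  # drop matching activities outright
  reject: [~r/buy followers/i],
  # keep matches off the federated timeline instead of rejecting
  federated_timeline_removal: ["lulz"],
  # rewrite matched text in place
  replace: [{"flamewar", "[redacted]"}]
```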

Ariadne Conill 🐰:therian: @ariadne@treehouse.systems
in reply to #2 - replies: #4

- the ability to derank remote posts based on their sentiment (not in pleroma mainline, but has been an active area of research in the MRF subsystem community since 2018)

Ariadne Conill 🐰:therian: @ariadne@treehouse.systems

- the ability to remove content warnings that are useless, e.g. from actors which constantly shitpost using content warnings (2018)

- the ability to require signups to complete a CAPTCHA to reduce instance abuse (2019) (edit: finally implemented in mastodon in 2023 and only because it became a problem on mastodon.social, which people started defederating)

- the ability to limit what instances get sent posts from a local actor (for example: to limit the possible damage caused by spicy posts) (2019)

Ariadne Conill 🐰:therian: @ariadne@treehouse.systems

- the ability to personalize representations of local posts when pushed or fetched by remote servers, e.g. to mold them into being consistent with that remote server's policy (2020)

Ariadne Conill 🐰:therian: @ariadne@treehouse.systems

look i'm just saying that the fediverse would be a better place if we had admin tools that were good

Ariadne Conill 🐰:therian: @ariadne@treehouse.systems

- moderation tools that are designed to be composed (e.g. used in concert with other moderation tools as part of a larger solution) (2018)
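
Composition here means each configured policy's filter/1 output is fed to the next, and any rejection halts the pipeline. A conceptual sketch of that fold (an illustration of the idea, not Pleroma's literal implementation):

```elixir
defmodule MRFChain do
  # Run an activity through a list of policy modules, threading the
  # (possibly rewritten) activity along and short-circuiting on the
  # first rejection.
  def filter(policies, activity) do
    Enum.reduce_while(policies, {:ok, activity}, fn policy, {:ok, acc} ->
      case policy.filter(acc) do
        {:ok, rewritten} -> {:cont, {:ok, rewritten}}
        {:reject, _reason} = rejection -> {:halt, rejection}
      end
    end)
  end
end
```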

Ariadne Conill 🐰:therian: @ariadne@treehouse.systems

- automatically detect and mark explicit content as sensitive using perceptual hashing (2021)

- automatically reject illegal content using perceptual hashing (2021)

- block content using DNSBLs (2021)

- block incoming messages with excessive links (2019)

- block incoming messages with excessive mentions (2019)

- force bot traffic to post unlisted (2020)
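
As a taste of the last item, here is a hedged sketch of a policy that demotes bot posts from public to unlisted by moving the public collection from "to" into "cc". The module name is hypothetical (some Pleroma-derived codebases ship a policy along these lines), and a production version would also rewrite the inner object's addressing:

```elixir
defmodule Pleroma.Web.ActivityPub.MRF.ForceBotUnlistedSketch do
  @behaviour Pleroma.Web.ActivityPub.MRF.Policy

  @public "https://www.w3.org/ns/activitystreams#Public"

  @impl true
  def filter(%{"type" => "Create", "actor" => actor} = activity) do
    with %{actor_type: "Service"} <- Pleroma.User.get_cached_by_ap_id(actor),
         true <- @public in (activity["to"] || []) do
      # unlisted means Public sits in cc rather than to
      activity =
        activity
        |> Map.update("to", [], &List.delete(&1, @public))
        |> Map.update("cc", [@public], &[@public | &1])

      {:ok, activity}
    else
      _ -> {:ok, activity}
    end
  end

  def filter(activity), do: {:ok, activity}

  @impl true
  def describe, do: {:ok, %{}}
end
```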

Ariadne Conill 🐰:therian: @ariadne@treehouse.systems

- automatic mitigation of server-DoSing hellthreads (2018)

- automatic rejection of messages which reference unwanted remote emojis (2023)

- automatic follow spam blocking (2019)
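
The hellthread item ships in mainline as HellthreadPolicy; roughly, with illustrative thresholds (verify key names against your release):

```elixir
config :pleroma, :mrf,
  policies: [Pleroma.Web.ActivityPub.MRF.HellthreadPolicy]

config :pleroma, :mrf_hellthread,
  delist_threshold: 10,  # posts mentioning 10+ accounts get delisted
  reject_threshold: 20   # posts mentioning 20+ accounts get rejected
```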

Ariadne Conill 🐰:therian: @ariadne@treehouse.systems

- normalization of message HTML and mention/hashtag formatting (2019)

- normalization of inline mentions and hashtag presence (2019)

- filtering of activitystreams content types (2019)

Ariadne Conill 🐰:therian: @ariadne@treehouse.systems

i could keep going, but i think i've made my point

Andrew Dunham @andrew@ottawa.place

@ariadne A hearty "yes please" +1 to all of these; as an instance admin, I would use all of them 💯

gnu/m43 @Mae@is.badat.dev
@ariadne why is treehouse running masto then? Just because a migration would be too much effort?
Ariadne Conill 🐰:therian: @ariadne@treehouse.systems

@Mae two reasons:

1. AdminFE sucks

2. Migration is difficult

them prin Lu @luka@sonomu.club

@ariadne i really hope #gotosocial @gotosocial devs are watching this

Emelia 👸🏻 @thisismissem@hachyderm.io

@ariadne do you know where I can find information on pleroma's perceptual hashing capabilities?

I've googled and looked through their source code but can't find anything yet. I did see there's an NSFW server, but couldn't find its code.

Ariadne Conill 🐰:therian: @ariadne@treehouse.systems

@thisismissem it's just photodna

Lunar 🛸 ♾ @lunarloony@dosgame.club

@ariadne - ability to sing its name like the theme to Daytona USA

Becca Cotton-Weinhold @rlcw@ecoevo.social

@ariadne what does this kind of personalization look like? Does it edit the content, or is it more on level of applying content warnings and alt text?

Chad :mstdn: @chad@mstdn.ca

@ariadne CAPTCHA is already a thing. Just needs to be enabled.

docs.joinmastodon.org/admin/op

Ariadne Conill 🐰:therian: @ariadne@treehouse.systems

@chad wow glad to see they finally caught up with pleroma in 2019 in this ONE area.

4censord :nfp: @4censord@unfug.social

@ariadne at least the captcha thing is present in mainline mastodon

ראַף דער נאַר 🟣 @raf@babka.social

@ariadne

Are all these supported in akkoma as well?

affine @affine@yourwalls.today
@ariadne
>- the ability to quarantine unknown instances until they are approved by the admin (pleroma has supported this through a combination of multiple features since 2019)

Wait, what are these features? Being able to check what's trying to federate would be a godsend for me as an admin, but the best I've figured out was running an allowlist instance and manually adding things whenever I see a broken thread, and that gets old fast.
Ariadne Conill 🐰:therian: @ariadne@treehouse.systems

@affine you can automate the allowlisting based on searching the object database for missing references
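
A hypothetical one-off helper along those lines (not a Pleroma API; it assumes Pleroma's Postgres schema, where objects carry a jsonb data column, so verify before running anything like it): collect the hosts of inReplyTo references that have no local copy, as allowlist candidates.

```elixir
defmodule AllowlistCandidates do
  # Hosts referenced by inReplyTo for which we hold no local object;
  # these are the instances that broken threads are waiting on.
  def hosts do
    sql = """
    SELECT DISTINCT split_part(data->>'inReplyTo', '/', 3) AS host
    FROM objects
    WHERE data->>'inReplyTo' IS NOT NULL
      AND NOT EXISTS (
        SELECT 1 FROM objects o2
        WHERE o2.data->>'id' = objects.data->>'inReplyTo'
      )
    """

    %{rows: rows} = Pleroma.Repo.query!(sql)
    List.flatten(rows)
  end
end
```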

halva is @halva@wetdry.world

@ariadne honestly i'd move to *oma but like

there's literally like one somewhat good looking client for it, and it's soapbox

Ariadne Conill 🐰:therian: @ariadne@treehouse.systems

@halva there is a fork of soapbox done by some of the pleroma devs called mangane

Nicole Parsons @Npars01@mstdn.social

@ariadne

This would be very handy to thwart disinformation used in scams, genocides, election interference, or climate denial.

@lillian

@ariadne tbh, the openness of the MRF subsystem itself is worth a mention here. among the larger pleroma instances there's a culture where, if you want some custom moderation feature, you can write an MRF policy yourself and just add it to your instance. I haven't seen anything like that among the mastodon instances (yes, there are patched codebases, but nothing intentionally made to be extensible)

Ariadne Conill 🐰:therian: @ariadne@treehouse.systems

@lillian yep that's because i designed it that way. you can't enumerate all possible threats, so you need a framework that can be extended by anyone.
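
The extension point is small: a policy is one module implementing filter/1 (plus describe/0 for nodeinfo reporting), which passes the activity through, rewrites it, or rejects it. A minimal skeleton in the shape of Pleroma's documented examples (the behaviour module has moved between MRF and MRF.Policy across versions):

```elixir
defmodule Pleroma.Web.ActivityPub.MRF.MyPolicy do
  @behaviour Pleroma.Web.ActivityPub.MRF.Policy

  @impl true
  def filter(%{"type" => "Create"} = activity) do
    # inspect or rewrite the activity map here;
    # return {:reject, "reason"} to drop it instead
    {:ok, activity}
  end

  def filter(activity), do: {:ok, activity}

  @impl true
  def describe, do: {:ok, %{}}
end
```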

SpaceLifeForm @SpaceLifeForm@infosec.exchange

@ariadne

Interesting thread.

How does Glitch-soc stack up?

cc: @jerry @paco

Ariadne Conill 🐰:therian: @ariadne@treehouse.systems

@SpaceLifeForm @jerry @paco equally deficient

Emelia 👸🏻 @thisismissem@hachyderm.io

@ariadne rejecting content based on hashtag has been on my list to do for Mastodon, but needs this activitypub issue resolved: github.com/w3c/activitypub/iss

This was an early attempt at this in Mastodon: github.com/mastodon/mastodon/p

Just silently dropping the message may not be the best option, hence needing community consensus on best practices

FinchHaven @FinchHaven@infosec.exchange

@thisismissem

This is interesting, but raises several questions for me (not that I'm anyone) that I've not seen addressed

Maybe early days in the larger issue, or something

Is this seen as content rejected because it contains $Word only, or content rejected because $Word is used in conjunction with a hashtag?

Who curates the list of $Words?

Then, a further question because of "Create a global default text filter to help prevent the use of racist and abusive language (Issue #31182)"

What language will the "global default text filter" list be kept in?

What becomes of other languages, which can be equally problematic on their own or in translation?

cc @ariadne

Emelia 👸🏻 @thisismissem@hachyderm.io

@FinchHaven @ariadne I think a lot of what's been said is based on the MRF capability, where (as I understand it) you run additional Elixir code in your server, much like a plugin system.

MRF has been deemed dangerous by some because it allowed rewriting audience & text on posts & publishing Flag activities as Notes; i.e., you can use it to out people who report others & rewrite posts to make it look like someone has said something they haven't.

Basically, great power, great responsibility.

Ariadne Conill 🐰:therian: @ariadne@treehouse.systems

@thisismissem @FinchHaven nothing stops me from patching mastodon to do these things.

Emelia 👸🏻 @thisismissem@hachyderm.io

@ariadne @FinchHaven no, but it does create some barrier to misuse, which is arguably better than none. it's all a balancing act.

Ariadne Conill 🐰:therian: @ariadne@treehouse.systems

@thisismissem @FinchHaven there is zero difference between someone publishing a patch to mastodon which does this and an MRF policy.

Andrew Dunham @andrew@ottawa.place

@ariadne @thisismissem @FinchHaven Also, respectfully: it's not even particularly difficult to patch Mastodon to do this! I'm not going to draw the rest of the owl, but it took me about five minutes to find the `process_audience` and `visibility_from_audience` functions under `ActivityPub::Activity::[...]`; between that and patching `Status.create`, I'm pretty sure I could figure out a way to make all non-public statuses from a specific instance, user, etc. be public instead of private. And I wouldn't call myself a particularly proficient Ruby developer, either.

In my personal opinion, the harm that is currently being caused by not having flexible, extensible, and powerful moderation tooling *vastly* outweighs the harm that could be caused by making it easier for instance admins to do something that they're already able to do.

Ariadne Conill 🐰:therian: @ariadne@treehouse.systems

@andrew @thisismissem @FinchHaven yeah exactly.

sure, a moderation tooling framework can be abused to tamper with messages in a way that is anti-social. no doubt about it. moderation tooling has to be powerful in order to do its job.

but that reality does not justify the current *nothing* that mastodon developers are doing about anti-abuse since forever.

🆘Bill Cole 🇺🇦 @grumpybozo@toad.social

@ariadne @andrew @thisismissem @FinchHaven This smells like the chatter among email admins ~25 years ago about whether content inspection was a step onto a slippery slope of admin abuses or whether it was unavoidable given spammer behavior.

I expect that eventually a similar outcome will result: better filtering giving admins excessive hypothetical power will become the norm, but only after the garbage gets really bad for everyone.

Emelia 👸🏻 @thisismissem@hachyderm.io

@FinchHaven @ariadne so when it comes to filtering instance-wide, it ends up being "whatever your admin decides", so you need to 100% trust them not to do bad things with the power they have.

I think for Mastodon, if I were to implement content filtering at the instance level, I'd want a public log of those actions to be available to users on the server.

But dropping the message versus sending a rejection isn't worked out at the ActivityPub level yet, afaik.

Ariadne Conill 🐰:therian: @ariadne@treehouse.systems

@thisismissem @FinchHaven nothing about ActivityPub blocks anyone from adding a transparency log to MRF, or adding an MRF-like facility to Mastodon.

this is just absurd.

Ariadne Conill 🐰:therian: @ariadne@treehouse.systems

@thisismissem @FinchHaven like, respectfully, i think you all need to focus on why people like @KimCrayton1 are using Mastodon with basically *zero* mitigations to deal with trivial abuse, instead of on whether or not MRF is too powerful for your liking.

when Kim is no longer being crapflooded with racial slurs and threats, things that could be *trivially mitigated with MRF policies*, then maybe you can talk shit about MRF.

xrvs @xarvos@outerheaven.club

@thisismissem @FinchHaven @ariadne you can have MRF policies publicly listed, and if an instance tampers more than they promise, it creates a (dis)reputation

also i might be wrong but i think a public log of applied actions would be trivial compared to MRF itself

Ariadne Conill 🐰:therian: @ariadne@treehouse.systems

@xarvos @thisismissem @FinchHaven yes, precisely. it is a non-issue in practice with 6 years of MRF.