Forgot to say it earlier, but a Facebook wall is essentially a multiple-choice form nudging its users into inciting ad hoc researchers to ask more questions.


Of course, it would be ideal if "researchers" posted a caricatural, ideal-typical representation of their own beliefs, to make it easier for "respondents" to decide whether they will tell the software that someone’s beliefs make them angry, sad, happy, etc.


Kinda creepy.

Anyway: I like to take as much room as I need for content warnings, spelling out their scope, limits, and so on. They’re as valuable as the message itself: they show that you can write about sensitive content while respecting other people’s boundaries.


I rarely just write "CW politics", but rather "CW impressions of private media (M6) reactions after the 2022 French legislative election results".


Oh, and I’ve just thought of this: maybe if a Twitter thread starts with a one-line tweet beginning with "CW: ", that content warning could be applied to the entire Mastodon thread. Just an idea, I’m not quite sure about it.
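If someone ever wanted to try it, here is a minimal sketch of the idea in Python, assuming a hypothetical crossposter built on the Mastodon.py library; `tweets` (the thread’s texts, oldest first) and `masto` (an authenticated client) are assumptions of mine, not any real tool’s API:

```python
# Minimal sketch, not a real crossposter: if the first tweet of a thread
# is a one-line "CW: ..." tweet, reuse it as the content warning
# (spoiler_text) of every toot in the mirrored Mastodon thread.
from mastodon import Mastodon

CW_PREFIX = "CW: "

def crosspost_thread(masto: Mastodon, tweets: list[str]):
    spoiler = None
    if tweets and tweets[0].startswith(CW_PREFIX) and "\n" not in tweets[0]:
        spoiler = tweets[0][len(CW_PREFIX):].strip()
        tweets = tweets[1:]  # the CW tweet itself is not re-posted

    previous = None
    for text in tweets:
        # spoiler_text is Mastodon's actual content-warning field
        previous = masto.status_post(
            text,
            in_reply_to_id=previous,
            spoiler_text=spoiler,
        )
```

The `spoiler_text` field is the real Mastodon content-warning mechanism; everything around it here is guesswork about how a crossposter might be structured.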

Like, on Twitter you can write "Twitter is toxic."


On the Fedi, or on Gemini, you can write "Social media provide a web UI and mobile apps so as to be ubiquitous and to integrate their hidden social structures into our social construction of the sovereign reality of ordinary life, i.e. (in this context) into our psychological incorporation of the social. This lets them change the way we think more efficiently, erode our ability to criticize them, in thought or in public, and thus gain more sway over our habits. Furthermore, Twitter, like every social medium, profiles its users and exploits their neuroses (fear of death, tribalism) to heighten their feeling of danger and their fear of missing out; since this production of neuroses is a by-product of the interface itself and of users competing with everyone else for attention (e.g. with QRTs and online harassment), users will end up fostering these neuroses in their loved ones without even realizing it."


Slightly clearer, more convincing, and longer, isn’t it?


Talking about "toxicity" in a social context is literally a meme derived from these carefully studied social media interfaces. Their operators can literally pour billions into applied or fundamental psychological and social research, and even conduct their own experiments, not even counting the constant user profiling.


Profiling meaning, of course, using user metadata (likes, favs, retweets, etc.) to build social graphs and user profiles, then optimizing the user interface towards certain targets. This is literally applied sociology: this specific UI element is meant for socially or psychologically vulnerable users, that other one to keep influencers on board, and so forth.
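To make that first step concrete, here is a minimal sketch of turning interaction metadata into a weighted social graph, in Python with networkx; the sample records, the edge weights, and the "influencer" heuristic are all invented for illustration, not anyone’s actual pipeline:

```python
# Minimal sketch: raw interaction metadata -> weighted social graph.
# The sample data and the weights are illustrative assumptions.
import networkx as nx

# (actor, target, kind) records, as a profiler might log them
interactions = [
    ("alice", "bob", "like"),
    ("alice", "bob", "retweet"),
    ("carol", "bob", "like"),
    ("bob", "carol", "reply"),
]

# Heavier interactions count for more in the profile
WEIGHTS = {"like": 1.0, "reply": 2.0, "retweet": 3.0}

graph = nx.DiGraph()
for actor, target, kind in interactions:
    w = WEIGHTS[kind]
    if graph.has_edge(actor, target):
        graph[actor][target]["weight"] += w
    else:
        graph.add_edge(actor, target, weight=w)

# A crude "influencer" signal: weighted in-degree
influence = dict(graph.in_degree(weight="weight"))
print(influence)  # e.g. {'alice': 0, 'bob': 5.0, 'carol': 2.0}
```

From there, "optimizing the UI towards certain targets" is just segmenting users on graphs like this one and serving each segment the interface variant it responds to.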