10 months ago
jupiter_rowland@hub.netzgemeinde.eu
Some of you may know, some may not, but I write extremely long image descriptions. My records are over 60,000 characters for one image and over 76,000 characters altogether for three images. They have to be that long, yes.

By the way, they go directly into the post. I write separate descriptions for alt-text.

(1/13)

#FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
10 months ago
jupiter_rowland@hub.netzgemeinde.eu
Still, my intended target audience is very widespread, regardless of whether I'm posting about OpenSim, or whether I'm posting about the Fediverse.

I mean, sure, I could write my OpenSim posts in such a way that only those who know OpenSim inside-out because they're active users understand them. It'd be easier for me.

But there are probably fewer than 50 registered users in the Fediverse who know more about OpenSim than that it exists, if even that. Of these, maybe half a dozen are active. That wouldn't exactly be a big target audience.

At the same time, there are lots of people out there who receive my posts, too, be it by following me, be it by following one of the hashtags I use, be it by discovering one of my posts on their federated timeline.

Now, these people may very well have their curiosity piqued upon receiving one of my posts with an image from within a virtual world, a kind of world they would never have imagined still exists now that the Metaverse hype has ended.

At the same time, the post doesn't tell them anything about this kind of world. Nothing is explained, and nothing is described, or if something is described, it's the very bare minimum requirement for the most basic accessibility. Even if they're sighted, they'd really like to know what kind of place this picture is from, and what the picture actually shows. There may be items in the picture that they've never seen before.

It's even worse if they're blind. Sure, you may say that blind or visually-impaired people have no use for virtual worlds, thus, literally not a single one of them can even be interested in that topic.

I wouldn't count on that. I'd say a blind or visually-impaired Fediverse user may just as well be curious about virtual worlds as a fully sighted user, including what everything looks like. If they are, they've got the same right to learn about it as a fully sighted user. Everything else would be ableist.

Both need my help. No-one else can really help them. No, not Google either. Not on this level of detail, and I'm not even talking about the first hits for OpenSim probably leading them to the human body simulation of the same name.

Both need explanations. And the blind or visually-impaired user needs visual descriptions. Only I can provide them on a sufficiently detailed and sufficiently accurate level.

Not providing sufficient description and explanation for image posts on this level of obscurity is about on the same level as not providing any image description at all for images in general. Hubzilla doesn't care. Mastodon, on the other hand, calls that ableist. And Mastodon calls it out.

My target audience for posts about the Fediverse, including memes, is even wider. It's basically everyone, especially my target audience for Fediverse memes.

But not everyone is on the same level of knowledge. And if an image is included, which is almost always the case in my meme posts, I have to describe the image anyway.

I guess you've read that shared post by @Stormgren. Image descriptions are not only good for blind or visually-impaired people, but they're also good to help sighted people understand an image which they wouldn't understand without the description.

So I describe and explain the meme image.

But explanations don't belong in alt-text. I've already written about that.

So I describe the meme image in the alt-text, and I explain it in the post where explanations belong.

Sure, I could say, "If they don't get it, they don't get it. I don't care." But doing so would make these posts inaccessible to the vast majority of their intended target audience.

#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
10 months ago
scott@loves.tech
@Jupiter Rowland
I guess you've read that shared post by @Stormgren. Image descriptions are not only good for blind or visually-impaired people, but they're also good to help sighted people understand an image which they wouldn't understand without the description.

That was actually a point I made months ago. Accessibility isn't just about blind people. I am not the only person who believes that. You keep telling me that I am wrong, but you sometimes repeat what I told you months ago, perhaps because someone else said the same thing.

I am not saying that you should ignore people's needs. In fact, I am saying the opposite of that.

What I am saying is that you should prioritize blind and sighted people who are interested in what you talk about over people who don't care about what you talk about.

Blind people, deaf people, people with various sensitivities, etc. are all part of your target audience. But only those who are actually interested in the topics you write about.

Prioritize these people:

"I read your posts and wish you would make these changes so it is easier for me to consume."

And deprioritize these people:

"I have no interest in your topic, and don't even read your posts, but you violated this rule that I arbitrarily made up, and I will force you to change even if your actual followers don't even want it."

That is all that I am saying.
10 months ago
jupiter_rowland@hub.netzgemeinde.eu
In an ideal Fediverse, I'd provide at least four to six different descriptions per original image so that as many people get a description that's as close to what they need as possible. And I'd link to them, even though that makes things more inconvenient for reasons I've already explained, and hope I'll get away with it.

Of course, this isn't feasible.

Still, I do have to find a way to satisfy as many people on Mastodon specifically as possible. Even if they aren't interested in what I write about, they may still cut into my reach.

It's incredibly difficult to get away with sub-standard accessibility for images on Mastodon. You can quickly be ostracised for not providing an alt-text, as well as for providing an alt-text that isn't useful enough because it carries too little information about the image.

This starts at being lectured about having to provide good alt-text. It continues via refusals to boost posts with undescribed images. And it ends at people who post images without describing them being muted or even blocked. If your only connection on a particular instance blocks you, you also disappear from the instance's federated timeline, and your posts may no longer be delivered to those on that instance who follow a hashtag that you use.

Besides, reputation matters. If you don't take accessibility by Mastodon's standard seriously enough, and your posts appear on Mastodon often enough, you're likely to earn a bad rep as someone who's lazy and careless and basically ableist. Even useless alt-text is ableist by Mastodon's standards.

I already have a very active member of Mastodon's alt-text police among my connections, so I can't neglect this.

On the other hand, I don't think you'll get a bad rep for trying too hard, for doing over 1,000% of what's required and still trying to improve and optimise and max out your accessibility game.

What I can say is that I'm not constantly being scolded for giving too long and too detailed image descriptions, nor am I scolded for parking them in the post. I guess my alt-texts already keep the alt-text police satisfied for now, and they should still know that if my alt-texts aren't sufficient, there's still the long description.

The
"I have no interest in your topic, and don't even read your posts, but you violated this rule that I arbitrarily made up, and I will force you to change even if your actual followers don't even want it."
people may still cut into my reach, so I can't ignore them. Besides, they don't enforce rules which they themselves have just pulled out of thin air right there and then. They enforce rules which the greater Mastodon community has already firmly established, and which Mastodon is expected to live by. And when I say "Mastodon", I mean the typical Mastodon user's perception of Mastodon, which includes everything that happens on Mastodon, regardless of where it comes from.

You aren't exempt from these rules just because you're on Hubzilla. Besides, Mastodon users can't see where you are anyway. So they'll assume you're on Mastodon unless they know better.

I'd say my current ways of describing and explaining my in-world images have been working quite well. I usually don't get any feedback, but this also means I don't receive any negative criticism about neglected accessibility.

If I were to make any major changes, such as moving the full descriptions into linked external documents, it'd be a gamble.

For one, the technical side is largely untested. All I know is that a blind user once told me that a Hubzilla Article didn't work with her screen reader, so Hubzilla Articles may not be accessible at all. I'm not going to post the majority of my future images on Hubzilla anyway, only on (streams), which can make Mastodon blank them out, and (streams) doesn't have the Articles app.

So all I could do would be a simple HTML document uploaded to the Files app that contains the full description, maybe with the described image embedded as you've suggested. However, I don't know how a given browser on a given OS handles an HTML document served to it not by a Web server, but by a file server. Will it open and display the document, or will it download the document to the device like any other file without opening it?
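From what I understand, the deciding factor is less the browser or the OS and more the Content-Type (and possibly Content-Disposition) headers the file server sends along with the document. A minimal sketch in Python, assuming the server guesses the type from the file extension the way the standard mimetypes module does (the file names here are made up for illustration):

```python
import mimetypes

# A browser decides between rendering inline and downloading mainly from
# the Content-Type header it receives. A file server that guesses the
# type from the extension would send text/html for an .html file, which
# browsers render inline:
html_type, _ = mimetypes.guess_type("description.html")
print(html_type)  # text/html

# A server that instead sends a generic binary type (or adds a
# "Content-Disposition: attachment" header) forces a download:
bin_type, _ = mimetypes.guess_type("description.bin")
print(bin_type)  # application/octet-stream
```

So if Hubzilla's Files app serves the document as text/html without an attachment disposition, browsers should display it; if it sends a generic binary type or forces an attachment, they'll download it instead. Which of the two actually happens would need to be tested against a real hub.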

Besides, what's also untested is the acceptance of linked image descriptions, especially when they're the only available source of transcripts of the bits of text in an image. After all, this means that something that's defined as absolutely mandatory, the transcripts, is not available in the post at all, only in some external document that requires extra effort and, on mobile devices, an extra app to open.

Externally linked descriptions may improve the acceptance because they drastically shorten the image posts, and they make image posts with multiple images within the post text possible in the first place.

But they may just as well have the opposite effect because they remove the detailed image description from the post, along with any and all explanations and, worst of all, every last text transcript, and instead park everything someplace that requires extra effort and, on phones, opening another app to access.

Yes, Mastodon doesn't like long posts. Some Mastodon users actually block everyone they catch posting over 500 characters, first strike. But this can be partially mitigated with a long-post content warning.

And there are way more complaints about missing or useless image descriptions than about too long posts. So I dare say the former cut into your reach more deeply than the latter.

#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta