Alt_Text_Bot uses CloudSight's image-recognition API to describe images submitted in tweets. Users simply mention @alt_text_bot in a tweet containing an image (the image must be attached to the tweet itself, not embedded via a Twitter card or a link), and Alt Text Bot replies with a description.
It has some limitations. The biggest is Twitter's character limit: converting a chart to text, for example, is a great idea, but longer descriptions get truncated, which limits how much value you can get from them.
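To make the truncation problem concrete, here is a minimal sketch. The only Twitter-specific fact assumed is the 140-character per-tweet limit in force at the time; the function and variable names are my own, not Alt Text Bot's actual code:

```python
TWEET_LIMIT = 140  # Twitter's per-tweet character limit at the time of writing


def build_reply(username: str, description: str, limit: int = TWEET_LIMIT) -> str:
    """Prefix the reply with an @-mention and truncate the description to fit.

    The mention eats into the character budget, so a long description
    (e.g. a chart converted to text) loses its tail -- exactly the
    limitation described above.
    """
    prefix = f"@{username} "
    budget = limit - len(prefix)
    if len(description) <= budget:
        return prefix + description
    # Leave one character for an ellipsis so the truncation is visible.
    return prefix + description[:budget - 1] + "…"


reply = build_reply(
    "alt_text_bot_fan",
    "A bar chart comparing quarterly revenue across four regions, " * 3,
)
print(len(reply) <= TWEET_LIMIT)  # True: the description was cut to fit
```

A real bot would also need to count URLs and media attachments against the limit the way Twitter does, which makes the budget even tighter than this sketch suggests.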
Another likely comes from the CloudSight API itself. If an image is tweeted twice (such as in a retweet), you might get two different descriptions (as this first one demonstrates, and then this second one). On top of that, not all images are clear, and context is hard to convey, as in this one showing wheelchair demonstrators in Seoul.
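The differing descriptions for the same retweeted picture suggest the recognition step is non-deterministic per request. One way a bot could return a consistent answer is to cache results keyed on a hash of the image bytes. This is purely a sketch of how such deduplication might work, not how Alt Text Bot actually behaves; `flaky_describe` is a stand-in for the real API call:

```python
import hashlib
import itertools


class DescriptionCache:
    """Return a cached description when the same image bytes reappear
    (e.g. in a retweet), so users see one consistent answer."""

    def __init__(self, describe):
        self.describe = describe  # the (possibly non-deterministic) API call
        self._cache = {}

    def get(self, image_bytes: bytes) -> str:
        key = hashlib.sha256(image_bytes).hexdigest()
        if key not in self._cache:
            self._cache[key] = self.describe(image_bytes)
        return self._cache[key]


# Stand-in for a recognition API that answers differently on every call.
_counter = itertools.count()


def flaky_describe(_image: bytes) -> str:
    return f"description #{next(_counter)}"


cache = DescriptionCache(flaky_describe)
print(cache.get(b"same-image") == cache.get(b"same-image"))  # True: cached
```

Hashing the raw bytes only catches exact duplicates; a re-encoded or resized copy of the same picture would still get a fresh (and possibly different) description.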
Regardless, given the current state of accessible images on Twitter, this tool is awesome. As I write this I see more and more people testing Alt Text Bot, so I expect that, even if this is just a proof of concept, more good things will come as a result.
The next image is me being excited about this, along with both descriptions that Alt Text Bot provided.
We are working on a similar project called Pictures for the Blind and Sighted, which enables blind and visually impaired people to experience and interact with pictures. Blind photographers send us pictures, we post them on our blog, and sighted people describe them. Our aim is to connect blind and sighted photographers and creative writers throughout the world. We are based in Germany, where we also host photography workshops for blind people. Please feel free to contact us if you would like to get involved: firstname.lastname@example.org https://photonarrations.wordpress.com