Finding useful results from a web search is hard enough, but what if the page returned makes extensive use of colour to convey information and you are colour blind, blind, or have low vision? How do you know in advance whether the information conveyed in colour will be accessible, short of visiting the page? What if you are deaf or hard of hearing and the page relies heavily on audio? How do you know if signing, captions, transcripts or descriptions are available without checking?
Currently there isn’t an effective way to refine the discovery of content based on user needs and preferences, but that is what the a11y metadata project seeks to address: how to discover the nature of resources on the web and how to filter to ones that are particularly suited to individual user needs, or that provide useful equivalents.
Simple search discovery mechanisms are not enough. Looking at the markup for a page will indicate whether images are used, but it won’t tell a search engine whether those images rely on colour or whether the author has written text into them, for example.
Creating an accurate picture of the accessibility of web resources through metadata is the ultimate goal of this project. We’re currently looking at adapting and enhancing the metadata work already done by Access for All for submission as new properties to schema.org. If you’re interested, I encourage you to check out the website.
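To make this concrete, here is a sketch of what such metadata might look like as schema.org microdata embedded in a page. The property names (accessibilityFeature, accessMode, accessibilityHazard) are drawn from the Access for All vocabulary being adapted for the proposal, so treat them as illustrative rather than final:

```html
<!-- Illustrative sketch only: property names follow the Access for All
     vocabulary proposed to schema.org and may change before adoption. -->
<div itemscope itemtype="http://schema.org/CreativeWork">
  <h1 itemprop="name">Introduction to Colour Theory</h1>
  <!-- Declares that captions and a transcript are available -->
  <meta itemprop="accessibilityFeature" content="captions">
  <meta itemprop="accessibilityFeature" content="transcript">
  <!-- Declares the sensory modes needed to consume the content -->
  <meta itemprop="accessMode" content="visual">
  <meta itemprop="accessMode" content="auditory">
  <!-- Flags a known hazard for photosensitive users -->
  <meta itemprop="accessibilityHazard" content="flashing">
</div>
```

With metadata like this in place, a search engine could let a deaf user filter results to videos that declare captions or a transcript, without anyone having to visit each page to check.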