Users who cannot see obviously won’t benefit from the visual styling that most of CSS enables. For these users, what matters is not the drop shadows or rounded corners but the actual textual content of the page—which must be rendered audibly if they are to understand it. The blind are not the only user demographic that can benefit from aural rendering of web content. A user agent embedded in a car, for example, might use aural styles to enliven the reading of web content such as driving directions or even the driver’s email.
In order to meet the needs of these users, CSS2 introduced a section
describing aural styles. As of this writing, two user
agents support aural styles to at least some degree:
Emacspeak and Fonix SpeakThis. In
spite of this, CSS2.1 effectively deprecates the media type
aural and all of the
properties associated with it. The current specification includes a
note to the effect that future versions of CSS are likely to use the
media type speech to represent
spoken renderings of documents, but it does not describe any details.
Due to this odd confluence of emerging implementation and
deprecation, we will only briefly look at the properties of
aural style sheets.
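To see how such rules would be scoped, consider a minimal sketch, assuming a user agent that honors the deprecated aural media type (the selectors and values here are purely illustrative):

```css
/* Rules confined to user agents that render the aural media type */
@media aural {
  h1 {
    volume: loud;       /* speak headings at a high volume */
    pause-after: 1s;    /* pause one second after each heading */
  }
}
```

Visual user agents would simply ignore this block, while a speech-capable agent such as Emacspeak could apply it when reading the page aloud.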
At the most basic
level, you must determine whether a given element’s
content should be rendered aurally at all. In
aural style sheets, this is handled with the speak property.
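For example, using the three values CSS2 defines for speak (the selectors below are hypothetical):

```css
/* Suppress or alter aural rendering on a per-element basis;
   class and element names are illustrative only. */
div.visual-only {
  speak: none;       /* do not render this element's content aurally */
}
acronym {
  speak: spell-out;  /* read the content one letter at a time */
}
body {
  speak: normal;     /* the default: render content as spoken words */
}
```

Note that speak: none differs from display: none; the element still participates in layout for visual agents, but its content is skipped during spoken rendering.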