Is there a Killer App for Accessibility?


This post speculates about alternative futures for accessibility, such as cost-busting open source developments; self-voicing interactions; overriding inaccessibility through proxy web servers; a screenless, voiced, menu-driven PDA; and higher-level software design practices.

An mp3 YouTube converter converted me!

First, I digress to tell you about a cool utility that sparked the serendipity behind this posting. Blind Cool Tech has a podcast, dated January 1, 2008, on a “YouTube to iPod converter”. I haven’t used YouTube much since the videos appear to my partial sight as white blobs with some hand waving going on. Last week, I began to rethink my intellectual aversion to the mindless drivel I feared populated YouTube and affronted my blindness sensibilities. The NYTimes had a piece on “Big Think”, a YouTube for eggheads that promised a variety of magazine-style videos of the ilk that interests me, namely politics and economics, reminiscent of the university-based video series at research universities.


Wow, this little YouTube to iPod converter really delivers and opened up a new way for me to get useful web information. The use case is: copy the URL for a video that interests you, the link you would click to invoke the viewer; paste the link into the accessible converter; choose a file name and location; choose the mp3 format; click “download and convert”; wait a while; then listen to the mp3 on your PC or send it on to a digital player, in my case my Bookport from aph.org. With a bit of imagination and patience, you can mentally fill in the video and also have a version to replay or bookmark. Moral of this digression: once again, podcasts from the blind community open new worlds for us new vision losers needing accessible software to stay in the mainstream. Thank you, Blind Cool Tech podcaster Brandon Heinrich! Check out my page of YouTube-converted videos on eyesight-related topics.
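
For readers who prefer to script the same download-and-convert use case, here is a minimal sketch. It assumes the yt-dlp Python package and ffmpeg are installed; these are present-day tools, not the converter described above, and the file name and URL are placeholders of my own.

    # Sketch only: automates the use case above (pick a video URL, choose a
    # file name, convert to mp3) using the yt-dlp package plus ffmpeg.
    from yt_dlp import YoutubeDL

    options = {
        "format": "bestaudio/best",           # grab the best audio stream
        "outtmpl": "big_think_talk.%(ext)s",  # hypothetical output file name
        "postprocessors": [{
            "key": "FFmpegExtractAudio",      # hand the audio to ffmpeg
            "preferredcodec": "mp3",          # the format chosen in the post
        }],
    }

    # Paste the link you would normally click to invoke the viewer.
    video_url = "https://www.youtube.com/watch?v=EXAMPLE_ID"

    with YoutubeDL(options) as downloader:
        downloader.download([video_url])      # "download and convert"

The resulting mp3 can then be played on a PC or copied to a digital player such as the Bookport.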

YouTube video on WebAnywhere Reader

By sheer luck, the first YouTube search term I chose was “screen reader”, and it turned up a provocative demo and discussion:

University of Washington Research: Screen Reader in a Browser, by Professor Richard Ladner and graduate student Jeffrey P. Bigham in the Web Insight project at cs.washington.edu

Briefly, this experimental work addresses the problems of costly screen readers and the need for on-the-fly retrieval of web information by blind users who are away from their familiar screen readers. The proposed solution is a browser adaptation that adds a script redirecting web pages to a so-called proxy server, which converts the structure of the page, known as its document object model, into text and descriptions that are returned to the browser as speech. This is pretty much what a desktop screen reader does, only now the reading and speech functions are remote. Of course, there are a gazillion problems and limits to this architecture, but it appears to work reliably and rapidly enough to achieve the social goals of its name, “WebAnywhere”. This research project, funded by the National Science Foundation, has also used the same architecture to modify web pages, adding ALT tags derived from link text, from OCR of the image, and from social-networking tagging of images. Not only is the technology very clever, but the work is also based on observations of how blind users actually use the web and on a growing appreciation of the complexity and often atrocious design of web pages, and of the use of AJAX technology, that frustrate visually impaired web users no matter the power of their screen readers or magnifiers or their skills.
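
To make the proxy idea concrete, here is a minimal sketch of the architecture as I understand it, not the project’s actual code: a script in the browser hands the page URL to a proxy server, which reduces the page’s document object model to text that can then be spoken. WebAnywhere returns actual speech; this toy returns plain text, and the /read?url= endpoint and the link announcements are my own assumptions.

    # Toy reading proxy: fetch a page, strip it to readable text, return it.
    # This only illustrates the architecture; WebAnywhere itself does far more.
    from html.parser import HTMLParser
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs, urlparse
    from urllib.request import urlopen

    class TextExtractor(HTMLParser):
        """Collect visible text, announce links, skip scripts and styles."""

        SKIP = {"script", "style"}

        def __init__(self):
            super().__init__()
            self.parts = []
            self.skip_depth = 0

        def handle_starttag(self, tag, attrs):
            if tag in self.SKIP:
                self.skip_depth += 1
            elif tag == "a":
                self.parts.append("link:")

        def handle_endtag(self, tag):
            if tag in self.SKIP and self.skip_depth:
                self.skip_depth -= 1

        def handle_data(self, data):
            text = data.strip()
            if text and not self.skip_depth:
                self.parts.append(text)

    class ReadingProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # Assumed request form: /read?url=http://example.com/
            query = parse_qs(urlparse(self.path).query)
            target = query.get("url", ["http://example.com/"])[0]
            html = urlopen(target).read().decode("utf-8", errors="replace")
            extractor = TextExtractor()
            extractor.feed(html)
            body = "\n".join(extractor.parts).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # A browser script would point here instead of rendering the page itself.
        HTTPServer(("localhost", 8080), ReadingProxy).serve_forever()

Pointing a browser at http://localhost:8080/read?url=http://example.com/ then returns the linearized text that a client-side speech engine could read aloud.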


As a former employee of the funding agency NSF, a reviewer of dozens of proposals, and, in my sighted days, a Principal Investigator on computer security education using animation, let me tell you this U. Washington project is a great investment of taxpayer funds. The work is innovative, well portrayed for outreach at webinsight.cs.washington.edu, addresses monumentally important global and social issues, and helps to bring about a better educated and motivated generation of developers and technology advocates on accessibility issues.
Now, is this proxy-based architecture the killer app for web accessibility? Possibly, with widespread support from IT departments and developers, but the project sets its goals more modestly: “web anywhere” for transient web use and, possibly more broadly, addressing the cost of current screen reader solutions. Maybe the proxy-based approach can also be extended, through demonstrations and experiments, to a wider range of accessibility problems.

Will free screen readers shake up the rehab industrial world? My pick is NVDA

In one sense, a no-cost screen reader provides a way of breaking up the current market hierarchy, which one might unfortunately describe as a cartel of disability vendors and service providers. Yes, the premier screen readers sell for around $1000, which seems justifiable given the relatively small market: the few million U.S. and international English-speaking PC users who are blind and on the rehab grid. Some, like the Blind Confidential blogger and industry insider, suggest the assistive technology industry is doing fine financially, able to afford more R&D and QA, and attractive to foreign investors. As in any segment of the computer industry, buyers become comfortable with the licensing, personalities, training, upgrade policies, and help lines, and therefore resist change. In the case of the $1K products, the buyers are often not individuals but rather rehabilitation and disability organizations with a mandate to provide user support through a chain of trained technical, health, and pedagogical professionals. A screen reader like NVDA (NonVisual Desktop Access) from nvaccess.org will challenge this industry segment as more users find it suitable for their needs, as I have written in my posting “Look ma, no screens! NVDA is my reader”. With broader acceptance of open source as a reliable and effective mode of software enterprise, as NVDA co-develops with other flexible open source office and browser products, and as energetic developers fan out to other accessibility projects, NVDA might well be the killer app of cost and evolution.

Should apps depend on screen readers or be self-voicing?

However, in a more radical sense, I argue that the screen reader model itself is badly flawed and, moreover, that technical accessibility alone is inadequate to meet the needs of blind web users.


The value of a universal screen reader is that it can do something useful for most applications by dredging out the fundamental information flowing through the operating system about an application’s controls and its users’ actions. But another model of software is so-called “self-voicing”, where the application maintains a focus system that tracks the user’s actions and provides its own reactions through a “speech channel”, supplying at least the equivalent of what an external screen reader would say. Such a model can do even better by providing flexible information about the context of a user event and the user’s preferences. A button might respond on focus with “Delete”, or “Delete the marked podcasts in the table”, or repeat the relevant section of the user manual, or elaborate a description of the use case, such as “first, mark the podcasts to delete, and here’s how to mark them; then press this button and confirm the deletions, after which the podcast files will be off your disk unless you downloaded them under another name”. Self-voicing as speech technology is implemented by many applications that allow a choice of voice, a speed setting, and even variation of voices matched to uses, e.g. the original message in an e-mail reply.

More significantly, self-voicing puts the responsibility for the usability of the application directly on a developer to provide consistent, coherent, and useful explanations of each possible user interaction. Further, this information is useful both to the end user and to testing professionals, who can check that each operation does what it says, only what it should, and in the proper context of the application’s use cases. Ditto, a tech writer working with a developer can make an application far more usable and maintainable in the long run. So we claim that a kind of killer development practice would be the shift of responsibility away from screen readers and onto self-voicing applications, including operating systems, where development processes will be improved. We base this claim on personal experience developing a self-voicing podcatcher, @Podder, for partially sighted users, using a speech channel that copies text to the clipboard to be read by an external text-to-speech application. Another self-voicing application is Kurzweil 1000 for scanning and document management, which employs the nicest spell checker around.
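
Here is a minimal sketch of that self-voicing pattern, not @Podder’s actual code: each control carries its own spoken descriptions at several verbosity levels and pushes them through a speech channel when focus arrives. The class names and the print-based channel are my own illustration; @Podder’s channel copies text to the clipboard for an external text-to-speech program to read.

    # Sketch of a self-voicing control with a pluggable speech channel.
    from dataclasses import dataclass, field

    class SpeechChannel:
        """Stand-in for whatever carries text to a speech engine
        (clipboard, TTS library, audio stream)."""

        def say(self, text):
            print("[speech] " + text)

    @dataclass
    class VoicedButton:
        name: str        # terse label, e.g. "Delete"
        context: str     # what the button does on this screen
        tutorial: str    # full use-case walkthrough for new users
        channel: SpeechChannel = field(default_factory=SpeechChannel)

        def on_focus(self, verbosity="context"):
            # The application, not an external screen reader, decides what
            # to speak, so the wording can match the current use case.
            spoken = {"name": self.name,
                      "context": self.context,
                      "tutorial": self.tutorial}[verbosity]
            self.channel.say(spoken)

    delete = VoicedButton(
        name="Delete",
        context="Delete the marked podcasts in the table",
        tutorial="First mark the podcasts to delete, then press this button "
                 "and confirm; the files will be removed from your disk.",
    )
    delete.on_focus("name")      # terse announcement for experienced users
    delete.on_focus("tutorial")  # full explanation for newcomers

The same descriptions double as test oracles: a tester can check that the button does exactly what its tutorial text promises.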

Can overcoming missing and muddled use cases conquer inaccessibility?

We have argued in the posting “Are missing, muddled use cases the cause of web inaccessibility?” that the main culprit in poor web usability is not technical accessibility but the way use cases are represented, tangled, and obscured by links as well as by graphics and widgets on web pages. A use case describes a sequence of actions performed to meet a specific goal, such as “register on a web site” or “archive e-mail messages”. Use cases not only lay out actions but also provide the rationale, the consequences, the constraints, and the error recovery procedures for interactions. Our claim is that software developers, of both desktop and web applications, force all users, sighted or blind, to infer the use cases from page contents and layouts, often embellished with links, such as blog rolls, that enhance social interaction and increase search engine rankings. Reports such as those from the Web Insight project and the Nielsen Norman report “Beyond ALT Text” describe in gory detail the frustrations and failures of visually impaired users struggling with their screen readers, magnifiers, and braille displays to overcome this poor representation of use cases as they try to keep up with sighted users in gathering information from, and shopping within, the constellation of current web sites. While I certainly believe that web accessibility activists are important in removing barriers and biases, the larger improvement will come when web sites are designed and clearly presented to achieve their use cases, for the benefit of everyone who gains from better web site usage. This is already occurring in re-engineering for mobile devices, where a missing or broken use case is especially apparent, and where getting it right is, seemingly, not really that hard.
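
To make the claim concrete, here is a hypothetical sketch of what writing a use case down explicitly might look like, instead of leaving users to infer it from a page layout; the structure and field names are my own illustration, not any standard.

    # A use case captured as data: goal, steps, rationale, and error recovery,
    # all of which a page (or a self-voicing application) could then present
    # directly instead of leaving the user to reverse-engineer them.
    register = {
        "goal": "Register on a web site",
        "steps": [
            "Open the registration form",
            "Enter an e-mail address and choose a password",
            "Submit the form",
            "Confirm via the e-mail message the site sends",
        ],
        "rationale": "An account is needed before comments can be posted",
        "consequences": "The site stores the address and sends a confirmation",
        "error_recovery": "If no confirmation arrives, request that it be resent",
    }

    for number, step in enumerate(register["steps"], start=1):
        print(number, step)

However the use case is recorded, the point is that it exists somewhere other than in the user’s head.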

How will mobile devices improve accessibility?

Finally, what about the marvelous mobile devices such as the fully voiced, menu-driven LevelStar Icon and the APH Braille Plus Mobile Manager? After 8 months of Icon addiction, I firmly believe that, cost aside, this form of computer is far superior to conventional Internet usage for the activities it supports, mainly e-mail, RSS management, browsing, and access to Bookshare.org resources. For example, I can consume the news I want in about an hour from the NY Times, Washington Post, Wall Street Journal, Arizona Republic, CNN, Inside Higher Ed, CNET, and a host of blogs. And that’s BEFORE getting up in the morning. No more waiting for web pages to load on a news web site, browsing through categories of information that don’t interest me, or bypassing advertisements. Additionally, I am surprised at how often I use the Icon’s “Mighty Mo” embedded browser over wireless rather than open up the laptop to bring up Firefox and fend off all my update-anxious packages and firewall warnings. Yes, life with the Icon is “living big”. The Icon is mainly part of the trend toward phones and wireless devices, but it just happens to be developed by people who know what visually impaired users need and want.


Maybe, somewhere out there, there is a wondrous software package that will dramatically boost the productivity and comfort of visually impaired computer users. With some assurance, we can recognize an upcoming generation of open-source-oriented developers seasoned by traditional assistive technology and adept at both project organization and current software tools. Funders and support organizations can look ahead to the utilization of their innovations and improvements. But maybe the core problem is much harder: as we claim, a disconnect in “computational thinking” between software designers who have found their way through models and user-oriented analysis and those web designers stuck at the token, speechless GUI level of browsers and web pages. Empirical researchers on accessibility are starting to witness and understand the fragility of users caught between artifacts designed for sighted users and clumsy, superhuman-emulating tools such as screen readers and magnifiers, while the proper responsibility for accessibility falls on developers who have yet to appreciate the power of readily available speech channels alongside graphical user interfaces.


What do others think? Is there a “killer app” for accessibility? Comment on the “As Your World Changes” blog at https://asyourworldchanges.wordpress.com or e-mail slger123@gmail.com.


2 Responses to “Is there a Killer App for Accessibility?”

  1. slger Says:

    Update on WebAnywhere

    An alpha release of WebAnywhere is available at http://webanywhere.washington.edu.

    This software promises significant increases in accessibility of web pages from public venues, such as libraries. The demo and initial version are, indeed, looking good.

    Of course, there must eventually be sufficient server infrastructure to support the remote reading of web pages for potentially thousands of simultaneous users. However, scalable server architectures, together with the relatively limited population of users, should overcome any barrier to wider use.

    Current screen reader users will easily adapt to WebAnywhere as one more option to accompany a more familiar portable general-purpose package, such as NVDA. The biggest barrier for us is the context switching among alternate keystroke tables and interaction protocols.

    WebAnywhere can be a killer app for public education about assistive technology. For free, it can show how visually impaired people make productive use of web content. One unfamiliarity for sighted users is the reliance on keyboard rather than mouse, using audio rather than visual information pathways. Also problematic for many people is “synthetic voice shock”, the initial repellent sense of harsh robotic voices, and the lack of confidence in one’s ability to hear through the voices into the web content.

    And, isn’t it great that more web page designers will no longer have an excuse for not testing their pages for usability by persons requiring assistance? Ha, ha!

    All this said, I still have to learn how to shut off the voices as I just got 4 speaking things going at one time on the above page. Yikes!

    Great job, Web Insight project at U. Washington! It’s now up to the rest of us to continue your technology adoption process and to step in as public educators about assistive technology and web accessibility.

  2. slger Says:

    Here’s a review of Web Anywhere, explaining more about its functionality and potential users.

    I am concerned that ‘screen reader’ is an inappropriate technological description for this valuable Web Anywhere experiment.

    Firstly, the term suggests more applicability than just reading web pages. For example, Web Anywhere doesn’t read any buttons on a browser as does a conventional ‘screen reader’.

    Secondly, it is actually providing an alternative voice rendering of the HTML elements within a browser, not reading from on, or under, the screen.

    Not that these comments should detract from the value of the product; I just like the term ‘web page reader’ better.

    Thirdly, that term is potentially more meaningful to Vision Losers who don’t yet know, or care, about screen readers. They just want some way to read a web page in a browser that may be out of their personal control.

    Susan
