Beyond Universal Design – Through Multi-Sensory Representations

The following recommendation was offered at the CyberLearning workshop addressed in the previous post on CyberLearning and Lifelong Learning and Accessibility. The post requires background in both accessibility and national funding policies and strategies.

This is NOT an official statement but rather a proposal for discussion. Please comment on the merits.

Motivation: CyberLearning must be Inclusive

To participate fully in CyberLearning, persons with disabilities must be able to apply their basic learning skills using assistive technology in the context of software, hardware, data, documentation, and web resources. Trends toward increased use of visualizations both present difficulties and open new arenas for innovative applications of computational thinking.

Often, the software, hardware, and artifacts have not been engineered for these users, unforeseen uses, and integration with a changing world of assistive tools. Major losses result: persons with disabilities are excluded or must struggle; cyberlearning experiments do not include data from this population; and insights from the cognitive styles of diverse learners cannot contribute to the growth of understanding of cyberlearning.

Universal Design Goals

Universal design embodies a set of principles and engineering techniques for producing computational tools and real-world environments for persons often far different from the original designers. A broader design space is explored with different trade-offs, using results from Science of Design (a previous CISE initiative). Computational thinking emphasizes abstraction to manage representations, which leads to the core challenges for users with disabilities and different learning styles. For example, a person with vision loss may use an audio channel of information delivered by text-to-speech rather than a graphical interface presenting the same underlying information visually. The right underlying semantic representation separates the basic information from its sensory-dependent representations, enabling a wider suite of tools and adaptations for different learners. This approach transcends universal design by tapping back into the learning styles and methods employed effectively by persons with many kinds of disabilities, which may then lead to improved representations for learners with various forms of computational and data literacy.

Beyond Universal Design as Research

“Beyond Universal Design” suggests that striving for universal design opens many research opportunities for understanding intermediate representations, abstraction mechanisms, and how people use these differently. This approach to CyberLearning interbreeds threads of NSF research: Science of Design and computational thinking from CISE, plus human interaction (IRIS), plus many programs of research on learning and assessment.

Essential Metadata Requirements

A practical first step is a system of meta-data that clearly indicates suitability of research software and associated artifacts for experimental and outreach uses. For example, a pedagogical software package designed to engage K-12 students in programming through informal learning might not be usable by people who cannot drag and drop objects on a screen. Annotations in this case may serve as warnings that could avoid exclusion of such students from group activities by offering other choices or advising advance preparation. Of course, the limitations may be superficial and easily addressed in some cases by better education of cyberlearning tool developers regarding standards and accessibility engineering.
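As a sketch of what such a meta-data annotation might look like, here is a minimal Python rendering of a suitability record and a warning check. Every field name here (`required_interactions`, `keyboard_alternative`, and so on) is hypothetical, invented for illustration; no such NSF scheme exists yet.

```python
# A hypothetical accessibility metadata record for a research software
# package aimed at K-12 informal learning. All field names are invented.
package_metadata = {
    "name": "BlocksProgrammingTool",           # hypothetical package name
    "audience": "K-12 informal learning",
    "required_interactions": ["drag-and-drop", "mouse"],
    "keyboard_alternative": False,
    "screen_reader_tested": False,
    "notes": "Canvas-based UI; objects lack semantic markup",
}

def suitability_warnings(meta):
    """Return warnings that could inform outreach or lesson planning."""
    warnings = []
    if ("drag-and-drop" in meta["required_interactions"]
            and not meta["keyboard_alternative"]):
        warnings.append("Requires drag-and-drop with no keyboard alternative")
    if not meta["screen_reader_tested"]:
        warnings.append("Not tested with screen readers")
    return warnings

print(suitability_warnings(package_metadata))
```

A teacher or experimenter consulting such annotations could offer alternative activities or prepare adaptations in advance, rather than discovering exclusion mid-lesson.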

Annotations also delimit the results of experiments using the pedagogical software, e.g. better describing the population of learners.

In the context of social fairness and practical legal remedies as laid out by the Department of Justice regarding the Amazon Kindle and other emerging technology, universities can take appropriate steps in their technology adoption planning and implementation.

Policies and Procedures to Ensure Suitable Software

For NSF, appropriate meta-data labeling then leads to planning and eventual changes in the ways it manages its extensive base of software. Proposals may be asked to include meta-data for all software used in or produced by research. Operationally, this will require proposers to become familiar with the standards and methods for engineering software for users employing adaptive tools. While in the short run this remedial action may seem limiting, in the long run the advanced knowledge will produce better designed and more usable software. At the very least, unfortunate uses of unsuitable software may be avoided in outreach activities and experiments.
Clearly, NSF must devise a policy for managing unsuitable software, preferably within a three-year time frame from inception of a meta-data labeling scheme.

Opportunities for Multi-Sensory Representation Research

Rather than viewing Suitable Software as a penalty system, NSF should find many new research programs and solicitation elements. For example, visual and non-visual (e.g. using text-to-speech) or mouse-versus-speech input representations can be compared for learning effectiveness. Since many persons with disabilities are high functioning in STEM, better understanding of how they operate may well lead to innovative representations.

Additionally, many representations taken for granted by scientists and engineers may not be as usable by a wider citizenry with varying degrees of technical literacy. For example, a pie chart instantly understandable by a sighted person may not hold much meaning for people who do not understand proportional representations, and be completely useless for a person without sight, yet be rendered informative by tactile manipulation or a chart-explainer module.

Toward a Better, Inclusive Workforce

Workforce implications are multi-fold. First, a population of STEM tool developers better attuned to the needs of persons with disabilities can improve cyberlearning for as much as 10% of the general population. Job creation and retention should improve for many of the estimated 70% unemployed and under-employed persons with disabilities, offering both better quality of life and reduced lifetime costs of social security and other sustenance. There already exists an active corps of technologically adept persons with disabilities with strong domain knowledge and cultural understanding regarding communities of disabilities. The “curb cuts” principle also suggests that A.D.A. adaptations for persons with disabilities offer many unforeseen, but tacitly appreciated, benefits for a much wider population, and at reasonable cost. NSF can reach out to take advantage of active developers with disabilities to educate its own as well as the STEM education and development worlds.

Summary of recommendation

  1. NSF adopt a meta-data scheme that labels cyberlearning research products as to suitability for different abilities, with emphasis on the current state of assistive technology and adaptive methods employed by persons with disabilities.

  2. NSF engage its communities in learning the necessary science and engineering for learning by persons with disabilities, e.g. using web standards and perhaps new cyberlearning tools developed for this purpose.

  3. NSF develop a policy for managing suitability of software, hardware, and associated artifacts in accordance with civil rights directives to universities and general principles of fairness.

  4. NSF establish programs to encourage innovation in addressing problems of unsuitable software and opportunities to create multiple representations, using insights derived from limitations of software as well as studies of high-performing learners with disabilities.

  5. NSF work with disability representing organizations to identify explicit job opportunities and scholarships for developers specializing in cyberlearning tools and education of the cyberlearning education and development workforce.

Note: one such group may be the National Center on Technology Innovation.

What if Accessibility had a Capability Maturity Model?

The field of software engineering made notable strides in the 1990s when the Department of Defense promulgated, via its contracting operations, a Capability Maturity Model supported by the Software Engineering Institute (SEI) at Carnegie Mellon University. Arguably, the model and resulting forces were more belief-based than experimentally validated, but “process improvement through measurement” became a motivating mantra. For more detail see the over-edited Wikipedia article on CMM.

This post is aimed at accessibility researchers and at managers and developers of products with an accessibility requirement, explicitly or not. Visually impaired readers of this post may find some ammunition for accessibility complaints and for advice to organizations they work with.

The 5 Levels of the Maturity Model

Here are my interpretations of the 5 levels of capability maturity focused on web accessibility features:

Level 1: Chaotic, Undefined

Each web designer follows his or her own criteria for good web pages, with no specific institutional target for accessibility. Some designers may know W3C standards or equivalents, but nothing requires the designers to use them.

Level 2: Repeatable but Still Undefined

Individual web designers can, through personal and group experience, estimate page size, say in units of HTML elements and attributes. Estimation enables better pricing against requirements. Some quality control is in place, e.g. using validation tools and maybe user trials, but the final verdict on suitability of web sites for clients rests in the judgements of individual designers. Should those designers leave the organization, their replacements have primarily prior products, but not necessarily any documented experience, to repeat the process or achieve comparable quality.

Level 3: Defined

Here, the organization owns the process, which is codified and used for measurement of both project management and product quality. For example, a wireframe or design tool might be not a designer option but rather a process requirement subject to peer review. Standards such as W3C might be applied, but are not as significant for capability maturity as that SOME process is defined and followed.

Level 4: Managed

At this level, each project can be measured for both errors in product and process with the goal of improvement. Bug reports and accessibility complaints should lead to identifiable process failures and then changes.

Level 5: Optimizing

Beyond Managed Level 4, processes can be optimized for new tools and techniques using measurements and data rather than guesswork. For example, the question “Is progressive enhancement an improvement or not?” can be analytically framed in terms of bug reports, customer complaints, developer capabilities, product line expansions, and many other qualities.

How well does CMM apply to accessibility?

Personally, I’m not at all convinced a CMM focus would matter in many environments, but still it’s a possible way to piggy back on a movement that has influenced many software industry thinkers and managers.

Do standards raise process quality?

It seems obvious to me that standards such as W3C raise awareness of product quality issues that force process definition and also provide education on meeting the standards. But is a well defined standard either necessary or sufficient for high quality processes?

An ALT tag standard requires some process point where ALT text is constructed and entered into HTML. A process with any measurement of product quality will involve flagging missing ALT texts, which leads to process improvement because it is so patently silly to require rework on such a simple task. Or are ALT tags really that simple? A higher level of awareness of how ALT tags integrate with the remaining text and actually help visually impaired page users requires more sensitivity, care, review, and user feedback. The advantage of standards is that accessibility and usability qualities can be measured in a research context, with costs then amortized across organizations and transformed into education expenses. So, process improvement doesn’t immediately or repeatably lead to true product quality, but it does help as guidance.
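As a minimal sketch of the automated flagging such a process-measurement step could use, here is a toy checker built on Python’s standard html.parser. The class name and the test markup are invented for illustration; real validators such as WAVE check far more than this.

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Flag <img> elements whose alt attribute is missing or empty."""
    def __init__(self):
        super().__init__()
        self.missing = []  # src values of images lacking usable ALT text

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            a = dict(attrs)
            alt = a.get("alt")
            if alt is None or not alt.strip():
                self.missing.append(a.get("src", "unknown"))

checker = AltChecker()
checker.feed('<p><img src="chart.png" alt="Sales by region, Q3"></p>'
             '<p><img src="logo.png"></p>')
print(checker.missing)  # only logo.png lacks ALT text
```

Note what this cannot measure: whether “Sales by region, Q3” actually helps a listener, which still requires human review and user feedback.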

Does CMM apply in really small organizations?

Many web development projects are contracted through small one-person or part-time groups. Any form of measurement represents significant overhead on getting the job done. For this, CMM spawned the Personal and Team Software Processes for educational and industrial improvements. Certainly professionals who produce highly accessible web sites have both acquired education and developed some form of personal discipline that involved monitoring quality and conscious improvement efforts.

Should CMM influence higher education?

On the other hand, embedded web development may inherit its parent organization's quality and development processes, e.g. in a library or IT division of a university. The abysmal level of accessibility across universities and professional organizations suggests that lack of attention to, and enforcement of, standards is a major problem. My recorded stumbling around Computer Science websites surfaced only one organization that applied standards I could follow to navigate web pages effectively, namely the University of Texas, which has a history of accessibility efforts. Not surprisingly, an accessibility policy reinforced with education, advocacy, and enforcement led small distributed departmental efforts to better results. Should commitment to educational fairness for persons with disabilities suddenly become the law of the land, whether by lawsuit or by education, at least one institution stands out as a model of both product and process quality.

Organizations can define really awful processes

A great example of this observation is Unrepentant’s blog and letter to DoJ about PDF testimonies. Hours of high-minded social justice and business case talk were represented as PDFs of plain text on Congressional websites. Not only is PDF a pain for visually impaired people, no matter how many accessibility techniques it applies; the simple fact of requiring an application external to the browser, here Adobe Reader, is a detriment to using the website on many devices, such as my Levelstar Icon or smart phones. My bet is that sure enough there’s a process on Congressional websites, gauged to minimize effort by exporting Word docs into PDF and then a quick upload. The entire process is wrong-headed when actual user satisfaction is considered, e.g. how often are citizens with disabilities and deviant devices using or skipping reading valuable testimony and data? Indeed, WCAG standards hint, among many other items, that, surprise, web pages use HTML that readily renders strings of text quite well for reading across a wide variety of devices, including assistive technology.

The message here is that a Level 3 process such as “export testimony docs as PDF” is detrimental to accessibility without feedback and measurement of actual end usage. The Unrepentant blogger claims only a few hours of work required for a new process producing HTML, which I gratefully read by listening on the device of my choice in a comfortable location and, best of all, without updating the damned Adobe reader.

Quality-oriented organizations are often oblivious to accessibility

The CMM description in the URL at the start of this article is short and readable but misses the opportunity to include headings, an essential semantic markup technique. I had to arrow up and down this page to extract the various CMM levels rather than apply a heading navigation as in this blog post. Strictly speaking the article is accessible by screen reader but I wouldn’t hire the site’s web designer if accessibility were a requirement because there’s simply much more usability and universality well worth applying.

I have also bemoaned the poor accessibility of professional computing organization websites. Until another generation of content management systems comes along, it’s unlikely we'll find improvement in these websites, although a DoJ initiative could accelerate this effort.

CMM questions for managers, developers, educators, buyers, users

So, managers, are your web designers and organization at the capability level you desire?

How would you know?

  1. Just sample a few pages in WebAIM's WAVE validator. Flagged errors are worth asking web developers about: do these errors matter? how did they occur? what should be changed or added to your process, if anything? But not all errors are equally important, e.g. unlabelled forms may cause abandoned transactions and lost sales, while missing ALT tags just indicate designer ignorance. And what if WAVE comes up clean? Now you need to validate the tool against your process to know if you’re measuring the right stuff. At the very least, every manager or design client has automated feedback in seconds from tools like WAVE and a way to hold web developers accountable for widespread and easily correctable flaws.
  2. Ask for the defined policy. Would an objective like W3C standards suffice? Well, that depends on costs within the organization’s process, including both production and training replacements.
  3. Check user surveys and bug reports. Do these correspond to the outputs of validation tools such as WebAim’s WAVE?
  4. Most important, check for an accessibility statement and assure you can live with its requirements and that they meet social and legal standards befitting your organizational goals.
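The unlabeled-forms concern raised in item 1 can also be checked mechanically. The following Python is a toy illustration only (class and helper names are my own invention), not a substitute for WAVE or user testing:

```python
from html.parser import HTMLParser

class FormLabelChecker(HTMLParser):
    """Collect input ids and label 'for' targets from a page fragment."""
    def __init__(self):
        super().__init__()
        self.inputs = {}     # id -> attribute dict for visible inputs
        self.labeled = set() # ids referenced by <label for="...">

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") != "hidden":
            self.inputs[a.get("id", "?")] = a
        elif tag == "label" and "for" in a:
            self.labeled.add(a["for"])

def unlabeled(checker):
    """Inputs with neither an associated <label> nor an aria-label."""
    return [i for i, a in checker.inputs.items()
            if i not in checker.labeled and "aria-label" not in a]

c = FormLabelChecker()
c.feed('<label for="email">Email</label><input id="email" type="text">'
       '<input id="qty" type="number">')
print(unlabeled(c))  # ['qty']
```

A manager who gets a non-empty list here has a concrete, cheap-to-gather measurement to bring back to the development process.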

Developers, are you comfortable with your process?

Level 1 is often called “ad hoc” or “chaotic” for a reason; consider it a wake-up call. For many people, a defined process seems constraining of design flexibility and geek freedom. For others, a process clears away many sources of mistakes and interpersonal issues about ways of working. Something as trivial as a missing or stupid ALT tag hardly seems worthy of contention, yet a process that respects accessibility must at some point have steps to insert and review ALT text, requiring only seconds in simple cases, minutes if a graphic lacks purpose or context, and many more minutes if the process mis-step shows up only in a validator or user test. Obviously processes can have high payoffs, or receive scolding from bloggers like Unrepentant and me if the process has the wrong goal.

Buyers of services or products for web development, is CMM a cost component?

Here’s where high leverage can be attained or lost. Consider procuring a more modern content management system. Likely these vary in the extent to which they export accessible content, e.g. making it easier or harder to provide semantic page outlines using headings. There are also issues of accessibility of the CMS product functions to support developers with disabilities.

In the context of CMM, a buyer can ask the same questions as a manager about a contractor organization’s process maturity, graded against an agreed-upon accessibility statement and quality assessment.

Users and advocates, does CMM help make your case?

If we find pages with headings much, much easier to navigate but a site we need to use lacks headings, it’s constructive to point out this flaw. It seems obvious that a web page with only an H4 doesn’t have much process behind its production, but is this an issue of process failure, developer education, or missing requirements? If, by any chance, feedback and complaints are actually read and tracked, a good manager would certainly ask about the quality of the organization’s process as well as that of its products.

Educators, does CMM thinking improve accessibility and usability for everyone?

Back to software engineering: getting to Level 5 was a BFD for many organizations, e.g. those related to NASA or in international competition with Indian enterprises. Software engineering curricula formed around CMM, and government agencies used it to force training and organizational change. The SEI became a major force, and software engineering textbooks devoted several chapters to project management and quality improvement. Frankly, as a former software engineering educator, I tended to skim this content to get to testing, which I considered more interesting, concrete, and relevant.

By the way, being sighted at the time, I didn’t notice the omission of accessibility as a requirement or standards body of knowledge. I have challenged Computing Education blogger and readers to include accessibility somewhere in courses, but given the combination of accreditation strictures and lack of faculty awareness, nothing is likely to happen. Unless, well, hey, enforcement just might change these attitudes. My major concern is that computing products will continue to be either in the “assistive technology ghetto” or costly overhauls because developers were never exposed to accessibility.

Looking for exemplars, good or bad?

Are there any organizations that function at level 5 for accessibility and how does that matter for their internal costs and for customer satisfaction as well as legal requirements?

Please comment if your organization has ever considered issues like CMM and where you consider yourself in a comparable level.

Story: A Screen Reader Salvages a Legacy System

This post tells the story of how the NVDA Screen Reader helped a person with vision loss solve a puzzle from a former workplace. Way to go, Grandpa Dave, and thanks for permission to reprint from the NVDA discussion list.

Grandpa Dave’s Story

From: Dave Mack
To: nvda

Date: Oct 29

Subj: [nvda] Just sharing a feel good experience with NVDA
Hi, again, folks, Grandpa Dave in California, here –
I have hesitated sharing a recent experience I had using NVDA because I know this list is primarily for purposes of reporting bugs and fixes using NVDA. However, since this is the first community of blind and visually-impaired users I have joined since losing my ability to read the screen visually, I have decided to go ahead and share this feel-good experience where my vision loss has turned out to be an asset for a group of sighted folks. A while ago, a list member shared their experience helping a sighted friend whose monitor had gone blank by fixing the problem using NVDA on a pen drive so I decided to go ahead and share this experience as well – though not involving a pen drive but most definitely involving my NVDA screen reader.

Well, I just had a great experience using NVDA to help some sighted folks where I used to work and from where I retired ten years ago. I got a phone call from the current president of the local Federal labor union I belonged to, and she explained that the new union treasurer was having a problem updating their large membership database with changes in the union’s payroll deductions that they needed to forward to the agency’s central payroll for processing. She said they had been working off-and-on for almost three weeks and no one could resolve the problem, even though they were following the payroll change instructions I had left on the computer back in the days when I had written their database as an amateur programmer. I was shocked to hear they were still using my membership database program, as I had written it almost three decades ago! I told her I didn’t remember much about the dBase programming language, but I asked her to email me the original instructions I had left on the computer and a copy of the input commands they were keying into the computer. I told her I was now visually impaired, but was learning to use the NVDA screen reader and would do my best to help. She said even several of the Agency’s programmers were stumped, but they did not know the dBase programming language.

A half hour later I received two email attachments, one containing my thirty-year-old instructions and another containing the commands they were manually keying into their old pre-Windows computer, still being used by the union’s treasurer once a month for payroll deduction purposes. Well, as soon as I brought up the two documents and listened to a comparison using NVDA, I heard a difference between what they were entering and what my instructions had been. They were leaving out some “dots,” or periods, which should have been included in their input strings into the computer. I called the union’s current president back within minutes of receiving the email. Everyone was shocked and said they could not see the dots or periods. I told them to remember they were probably still using a thirty-year-old low-resolution computer monitor and an old dot-matrix printer, which were making the dots or periods appear to be part of the letters they were situated between.

Later in the day I got a call back from the Local President saying I had definitely identified the problem, thanking me profusely, and saying she was telling everyone I had found the cause of the problem by listening to errors none of the sighted folks had been able to see. And, yes, they were going to upgrade their computer system now, after all these many years. (laughing) I told her to remember this experience the next time anyone makes a wisecrack about folks with so-called impairments. She said it was a good lesson for all. Then she admitted that the reason they had not contacted me sooner was that they had heard through the grapevine that I was now legally blind, and everyone assumed I would not be able to be of assistance. What a mistake and waste of time that ignorant assumption was, she confessed.

Well, that’s my feel-good story, but, then, it’s probably old hat for many of you. I just wanted to share it, as it was my first experience teaching a little lesson to sighted people in my own small way, with the help of NVDA.

Grandpa Dave in California

Moral of the Story: Screen Readers Augment our Senses in Many Ways – Invitation to Comment

Do you have a story where a screen reader or similar audio technology solved problems where normal use of senses failed? Please post a comment.

And isn’t it great that we older folks have such a productive and usable way of overcoming our vision losses? Thanks, NVDA project developers, sponsors, and testers.

Crossing the RSS Divide – making it simpler and compelling

RSS is a web technology for distributing varieties of content to wide audiences with minimal fuss and delay, hence its name, “Really Simple Syndication”. However, I’m finding this core capability is less well understood and perhaps presents shared barriers among visually impaired and older adult web users. This article attempts to untangle some issues and identify good explanatory materials as well as necessary web tools. If, indeed, there is an “RSS Divide” rather than just a poor sample of web users and my own difficulties, perhaps the issues are worth wider discussion.

So, what is RSS?

Several good references are linked below, or just search for “RSS explained”. Here’s my own framework:

Think of these inter-twined actions: Announce, Subscribe, Publish, Fetch, Read/Listen/View:

  1. Somebody (called the “Publisher”) has content you’re welcome to read. In addition to producing descriptive web pages, they also tell you an address where you can find the latest content, often called a “feed”. These are URLs that look like abc.rss or abc.xml and often have words or graphics saying “RSS”.
  2. When the Publisher has something new written or recorded, they, or their software, add an address to this feed, i.e. they “publish”. For example, when I publish this article on WordPress, the text will show up on the web page, but my blog feed will also have a new entry. You can keep re-checking this page for changes, but that wastes your time, right? And sooner or later, you forget about me and my blog, sniff. Here cometh the magic of RSS!
  3. You (the “Subscriber”) have a way, the RSS client, of tracking my feed to get the new article. You “subscribe” to my feed by adding its address to this “RSS client”. You don’t need to tell me anything, like your email; just paste the address in the right place to add it to the list of feeds the RSS client manages for you.
  4. Now, dear subscriber, develop a routine in your reading life where you decide, “ok, time to see what’s new on all my blog subscriptions”. So you start your RSS client which then visits each of the subscribed addresses and identifies new content. This “Fetch” action is like sending the dog out for the newspapers, should you have such a talented pet. The client visits each subscribed feed and notes and shows how many articles are new or unread in your reading history.

  5. At your leisure, you read the subscribed content not on the Publisher’s website but rather within the RSS client. Now, that content might be text of the web page, or audio (called podcasts), or video, etc. RSS is the underlying mechanism that brings subscribed content to your attention and action.
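Under the hood, the announce/publish/fetch cycle above rests on a small XML format. As a rough illustration of what an RSS client does when it fetches, here is how feed titles and links might be pulled out with Python's standard library; the feed content and URLs are invented:

```python
import xml.etree.ElementTree as ET

# A minimal, made-up RSS 2.0 feed, as a publisher might serve
# at an address like blog.example/feed.xml.
feed_xml = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>Crossing the RSS Divide</title>
          <link>http://blog.example/rss-divide</link></item>
    <item><title>Older post</title>
          <link>http://blog.example/older</link></item>
  </channel>
</rss>"""

# The client parses the feed and lists (title, link) pairs; comparing
# against reading history tells it which items are new or unread.
channel = ET.fromstring(feed_xml).find("channel")
items = [(i.findtext("title"), i.findtext("link"))
         for i in channel.findall("item")]
print(items[0])  # most feeds list the newest item first
```

Real clients add HTTP fetching, caching, and read/unread bookkeeping on top of exactly this kind of parse.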

What’s the big deal about RSS?

The big deal here is that the distribution of content is syndicated automatically and nearly transparently. Publishers don’t do much extra work, but rather concentrate on their writing, recording, and editing of content. Subscribers bear the light burden of integrating an RSS client into their reading routines, but this gets easier, albeit with perhaps too many choices. Basically, RSS is a productivity tool for flexible readers. RSS is especially helpful for those of us who read by synthetic speech, so we don’t have to fumble around finding a web site and then the latest post — it just shows up ready to be heard.

Commonly emphasized, RSS saves you lots of time if you read many blogs, listen to podcasts, or track news frequently. No more trips to the website to find out there’s nothing new, rather your RSS client steers you to the new stuff when and where you’re ready to update yourself. I have 150 currently active subscriptions, in several categories: news (usatoday, cnet, science daily, accesstech,…); blogs (technology, politics, accessibility, …), some in audio. It would take hours to visit all the websites, but the RSS client spans the list and tells me of new articles or podcasts in a few minutes while I’m doing something else, like waking up. With a wireless connection for my RSS client, I don’t even need to get out of bed!

This means I can read more broadly, not just from saving time, but also having structured my daily reading. I can read news when I feel like tackling the ugly topics of the day, or study accessibility by reading blogs, or accumulate podcasts for listening over lunch on the portico. Time saved is time more comfortably used.

Even more, I can structure and retain records of my reading using the RSS client. Mine arranges feeds in trees so I can skip directly to science if that’s what I feel like. I can also see which feeds are redundant and how they bias their selections.

So, RSS is really a fundamental way of using the Web. It’s not only an affordance of more comfort, but also becoming a necessity. When all .gov websites, local or national, plus all charities, etc. offer RSS feeds, it’s assumed citizens are able to keep up and really utilize that kind of content delivery. For example, government sites have feeds for news releases and articles by various officials that complement traditional news channels with more complete and honestly biased content, i.e. you know exactly the sources, in their own words.

The down side of RSS is overload, more content is harder to ignore. That’s why it’s important to stand back and structure reading sources and measure and evaluate reading value, which is enabled by RSS clients.

Now, about those RSS clients

After 2+ years of happily relying on the Levelstar Icon Mobile Manager RSS client, I’m rather abashed at the messy world of web-based RSS clients, unsure what to recommend to someone starting to adopt feeds.

  1. Modern browsers provide basic support for organizing bookmarks, with RSS feeds as a specific type. E.g. Firefox supports “live bookmarks”, recognizing feeds when you click the URL. A toolbar provides names of feeds to load into tabs. Bookmarks can be categorized, e.g. politics or technology. Various add-on components provide sidebar trees of feeds to show in the main reading window. Internet Explorer offers comparable combinations of features: subscribing, fetching, and reading.

  2. Special reader services expand these browser capabilities. E.g. Google Reader organizes trees of feeds, showing the number of unread articles. Sadly, Google Reader isn’t at this moment very accessible for screen readers, with hard-to-navigate trees and awkward transfer to text windows. Note: I’m searching for better recommendations for visually impaired readers.
  3. I’ve not used but have heard of email-based RSS readers, e.g. for Outlook. Many feeds offer email delivery of new articles, with you managing the articles in folders or however you handle email.
  4. Smart phones have apps for managing feeds, but here again I’m a simple cell phone caller only, inexperienced with mobile RSS. I hear Amazon Kindle will let you buy otherwise free blogs.
  5. Since podcasts are delivered via feeds, services like iTunes qualify, but they do not support full-blown text article reading and management.

So, I’d suggest first seeing whether your browser version handles feeds adequately and trying out a few. Google Reader, if you are willing to open or already have a Google account, works well for many sighted users and can be used rather clumsily if you’re partially sighted like me. Personally, when my beloved Icon needs repair, I find any of the above services far less productive and generally put my feed reading fanaticism on hiatus.

Note: a solid RSS client will export and import feeds from other clients, using an OPML file. Here are Susan’s feeds on news, technology, science, Prescott, and accessibility, with several feeds for podcasts. You’re welcome to save this file and edit out the feed addresses or import the whole lot into your RSS client.
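
OPML itself is just an XML outline, so moving feeds between clients is mechanical. Here is a minimal sketch (the file contents and URLs below are invented for illustration) of pulling every feed address out of an OPML export, flattening any category folders:

```python
import xml.etree.ElementTree as ET

# An invented miniature OPML export with one category folder.
OPML = """<?xml version="1.0"?>
<opml version="1.1"><head><title>My feeds</title></head>
<body>
  <outline text="News">
    <outline text="Daily News" type="rss" xmlUrl="http://example.com/news.rss"/>
  </outline>
  <outline text="Accessibility" type="rss" xmlUrl="http://example.com/access.rss"/>
</body></opml>"""

def feed_urls(opml_xml):
    """Collect every xmlUrl attribute, regardless of folder nesting."""
    body = ET.fromstring(opml_xml).find("body")
    return [node.get("xmlUrl") for node in body.iter("outline") if node.get("xmlUrl")]

print(feed_urls(OPML))  # ['http://example.com/news.rss', 'http://example.com/access.rss']
```

Importing is the reverse: the receiving client reads the same attributes and subscribes to each URL.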

Is there more to feeds in the future?

You betcha, I believe. First, feed addresses are data that can be shared on many social media sites, like the Delicious feed manager. This enables sharing and recommending blogs and podcasts among fans.

A farsighted project exploiting RSS feeds is Jon Udell’s Elm City community calendar project. The goal is to encourage local groups to produce calendar data in a standard format with categorization so that community calendars can be merged and managed for the benefit of everybody. Here’s the Prescott Arizona Community Calendar.

The brains behind RSS are now working on more distributed, real-time distribution of feeds: see Dave Winer’s Scripting News Cloud RSS project.

In summary, those who master RSS will be the “speed readers” of the web compared to others waiting for content to show up in their email boxes or wading through ads and boilerplate on websites. Indeed, many of my favorite writers and teachers have websites I’ve never personally visited but still read within a day of new content. This means a trip to these websites is often for the purpose of commenting or spending more time reviewing their content in detail, perhaps over years of archives.

References on RSS

  1. What is RSS? RSS Explained in simple terms

  2. Video on RSS in Plain English, emphasizing speedy blog reading in web-based RSS readers

  3. Geeky explanations of RSS from Wikipedia

  4. RSS links and explanation (semi-geeky)

  5. Examples of feeds
  6. Diane Rehm podcast show feed

Thinking about Blindness, Risks, and Safety Trade-offs

Facing safety trade-offs through risk management

It’s time to structure my wanderings and face my denial about the special problems and dangers of living with partial eyesight. This post starts a simple framework for analyzing risks and defining responses. Sighted readers may become aware of hassles and barriers presented to Vision Losers, who in turn may learn a few tricks from my experience.

Life is looking especially risky right now: financial follies, pirate attacks, natural disasters, ordinary independent activities, … A Vision Loser needs special precautions, planning, and constant vigilance. So, here I go trying to assemble needed information in a format I can use without freaking myself back into a stupor of denial.

Guiding Lesson: Look for the simplest rule that covers the most situations.

Appeals to experts and clever web searches usually bring good information, lots of it, way more than I can use. I discussed this predicament in the context of Literacy when I realized I couldn’t read the pie charts sufficiently well to understand asset allocations. I had 500 simulations from my “wealth manager”, projections to age 95, and my own risk profiles. But what I needed was a simple rule to live by, that fit these, now absurd, models, like

“Live annually on 4% of your assets”.

Another rule, one I obey, that could have saved trillions of dollars:

Housing payment not to exceed 1/3 of income.

Such rules help focus on the important trade-offs of what we can and cannot do sensibly rather than get bogged down in complex models and data we can’t fully understand or properly control. If we can abstract an effective rule from a mass of details, then we might be able to refresh the rule from time to time to ask what changes in the details materially affect the rule and what adjustments can cover these changes. We can also use generally accepted rules to validate and simplify our models. This is especially important for the partially sighted since extra work goes into interpreting what can be seen and considerable guess work into what’s out there unseen.

I need comparable safety rules to internalize, realizing their exceptions and uncertainty. Old rules don’t work too well. “Look both ways before crossing the street” (and also listen) fails against silent cars. “Turn on CNN for weather information” fails when I can’t read the scrolling banners.

Background from Software risk management

When I taught software engineering, the sections on project management always emphasized the need for Risk Management in the context of “why 90% of software projects fail”. This subject matter made the basis for a good teamwork lab exercise: prioritize the risks for a start-up project. I dubbed this hypothetical project Pizza Central, a web site to compare local pizza deals and place orders, with forums for pizza lovers. Since all students are domain experts on both pizza deliveries and web site use, they could rapidly fill out a given template. Comparing results always revealed a wide divergence of risks among teams: some focused on website outages, others on interfaces, some on software platforms. So one lesson conveyed among teams was “oops, we forgot about that”. My take-away for them was that this valuable exercise was easy enough to do but required assigned responsibilities for mitigating risks, tracking risk indicators, and sometimes unthinkable actions, like project cancellation.

I am about to try a bit of this medicine on myself now. Risk is a complicated subject; see Wikipedia. I’ll use the term to mean “the occurrence of a harmful event” in the context of a project or activity. The goal is to mitigate both the occurrences and the effects of these nasty events. But we also need indicators to tell when an event is ongoing or has happened. Since mitigation has a cost of response, both to prevent and to recover from events, it helps to prioritize events by likelihood and severity. So, envision a spreadsheet with event names; ratings for likelihood, severity, and costs; and perhaps a formula to rank importance. Associated with these events are lists of indicators and proposed mitigation actions with estimated costs. This table becomes part of a project plan, with assigned actions for mitigation and risk-tracking awareness across team members as a regular agenda item at project meetings.
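
The spreadsheet idea can be sketched in a few lines of code. This is a minimal illustration: the event names and 1-10 ratings come from the walk analysis later in this post, and the likelihood-times-severity formula is one common convention for risk exposure, not a prescription.

```python
# Hypothetical risk register; ratings are the 1-10 scales used below.
risks = [
    {"event": "Struck by weather hazard", "likelihood": 8, "severity": 9},
    {"event": "Trip over something",      "likelihood": 5, "severity": 6},
    {"event": "Hit by a vehicle",         "likelihood": 5, "severity": 7},
    {"event": "Getting lost",             "likelihood": 1, "severity": 1},
]

def priority(risk):
    # Rank by likelihood times severity, a common "risk exposure" measure.
    return risk["likelihood"] * risk["severity"]

# Print the register ranked from most to least pressing.
for risk in sorted(risks, key=priority, reverse=True):
    print(f'{priority(risk):3d}  {risk["event"]}')
```

The ranking makes the trade-offs visible at a glance: weather hazards dwarf getting lost, so mitigation effort should flow there first.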

Risk analysis for my workout/relaxation walk

I will follow this through on the example of my daily workout walk. I do not use my white cane because I feel safe enough, but really, is this a good tradeoff? Without the cane, I can walk briskly, arms swinging, enjoying shadows, tree outlines, and the calls of quail in the brush. The long white cane pushes my attention into the pavement, responding to minor bumps and cracks my strides ignore, and there’s even a rhythm to the pavement that adjusts my pace to a safe sensation. I would not think of walking without my guiding long white cane on a street crowded with consumers or tourists but this walk covers familiar terrain at a time frequented by other recreational walkers. This situation is a trade-off unique to the partially sighted, who only themselves can know what they can safely see and do, living with the inevitable mistakes and mishaps of the physical world.

Here are a few events, with occasional ratings on a 1-10 scale. For this application, I feel it’s more important to ask the right questions, albeit some silly, to surface my underlying concerns and motivate actions.

  1. Event: Struck by lightning, falling tree, or other bad weather hazard

    Indicators: strong winds, thunder, glare ice

    Likelihood: 8, with walks during

    Severity: 9, people do get whacked

    Mitigation actions and costs:

    • -7, look for dark clouds, but I can’t see well enough in all directions over the mountains
    • 0, listen for distant thunder, also golf course warning sirens
    • -1, check CNN and weather channels, but it’s hard to find the channel with a low-accessibility remote and cable box, and banners and warning screens are not always announced. FIND RELIABLE, USABLE WEATHER CHANNEL, ADD TO FAVORITES
    • Ditto for Internet weather information, but I am never sure I am on a reliable, up-to-date website or stream, especially if ad-supported
    • Ditto for Radio, using emergency receiver. ACTION: set up and learn to use.
    • For ice patches, choose most level route, beware of ice near bushes where sunlight doesn’t reach for days after a storm, walk and observe during afternoon melting rather than before dusk freezing

    Summary: I should keep the emergency radio out and tuned to a station. ACTION needed for threats other than weather, too.

  2. Event: Trip over something

    Indicators: stumbling, breaking stride, wary passers-by

    Likelihood: 5

    Severity: 6

    Mitigation actions and costs:

    • 0, Follow well-defined, familiar route with smooth pavements, rounded curbs – I DO THIS!
    • Never take a short cut or unpaved path.
    • $100, wear SAS walking shoes with Velcro tabs, NO SHOE LACES to trip over
    • 0, detour around walkers with known or suspected pets on leashes, also with running kids or strollers.
    • 0, take deliberate steps up and down curbs, use curb cuts where available. Remember that gutters below curbs often slope or are uneven. Don’t be sensitive that people are watching you “fondle the curb”.
    • Detour around construction sites, gravel deliveries, … Extra caution on big item trash pickup days when items might protrude from trash at body or head level.
    • Detour around bushes growing out over sidewalks, avoiding bush runners, also snakes (yikes)

    Summary: I feel safe from tripping now that I have eliminated shoe laces and learned, the hard way, not to take curbs for granted.

  3. Event: Hit by some vehicle

    Indicators: movement, perhaps in peripheral vision; noise

    Likelihood: 5

    Severity: 7

    Mitigation actions and costs:

    • 0, stay on sidewalks, if not overgrown by brush
    • 1, walk when others are out and about, expecting auto and bicycle drivers to be aware
    • find a safe, regular road crossing, away from an irregular intersection, and jaywalk. Is this wise?
    • Do not walk at times of day when sun may blind drivers, e.g. winter days when sunsets are long and low
    • Do not trust ears. Bicycles are quiet on smooth pavements, move rapidly down hill. Also hybrid cars may run silently.
    • Halt completely when in the vicinity of noisy delivery trucks or car radios. Blending hearing and seeing requires both be at maximum capacity.
    • Remember that eerie white cross memorial indicating a dangerous intersection with cars coming around a blind curve and often running the stop sign. Also shout at speeders and careless drivers.
    • REJECTED: Use white cane to warn others I’m limited at seeing them. I don’t think the white cane adds more warning than my active body motion.

    Summary: I am currently using 3 safe routes, must not let mind wander at each intersection and crossing. ACTION: sign a petition for noise indicators on silent motors.

  4. Event: Getting lost

    Indicators: unfamiliar houses, pavements, intersections

    Likelihood: 1

    Severity: 1

    Mitigation actions and costs:

    • Follow same routes through established neighborhoods
    • $1000, get GPS units and training. Consider when I move and need to define new walking routes.
    • Beware of boredom tempting me onto alternate routes.

    Summary: I used to get lost, turned around in neighborhoods; I no longer take those excursions. Three regular walking paths will do.

  5. Event: Cardiac attack

    Indicators: frequent stops, pain, heavy breathing

    Likelihood: Hey, that’s why I do these walks, to build breathing stamina at an altitude of 5000 ft with several serious up and down hill stretches.

    Severity: Something’s gonna get me, hope it’s quick.

    Mitigation actions and costs:

    • Exercise regularly to maintain condition.
    • Checkup when Medicare allows and physicians are available (thanks U.S. health care system)

    Summary: Not to worry as long as walks feel good.

Risk Management Summary

I choose this walk as my primary exercise activity, have integrated it into my daily routine, and generally feel better as well as safe. Eliminating shoe laces removed a major stupid cause of minor stumbling and potential falls. I have avoided unsafe and confusing trajectories. My main fears are: Fedex or UPS delivery trucks, fast downhill bikes, pet greetings, loose children, persistent brush-hidden ice patches. My cane would, in this environment, change attention from moving objects toward pavement which is smooth and uncluttered. The cane would do little to warn off threats — they either notice me or not. I choose to balance my partial sight used cautiously with improving listening skills and opt to walk faster and more comfortably without the leading cane and its frequent catches in cracks and grass.

Actions: While walking may not be the main reason, I must gear up with that emergency radio for other threats. More generally, I must learn about emergency information sources that fit my vision capabilities.

References on Risks

  1. Wikipedia on Risk
  2. How-to for risk management
  3. Risks to the public using software, decades of examples of software-related events and management as risks
  4. ‘Nothing is as Simple’ blog, a phrase to remember and examples
  5. Previous post on Literacy and reading charts, how I discovered I couldn’t read pie chart data
  6. Previous Post ‘Grabbing my Identity Cane to Join the Culture of Disability’. I have now progressed through orientation and mobility training to using a longer cane with a rolling tip.
  7. Emergency preparedness checklists for Vision Losers — TBD

Is there a Killer App for Accessibility?

This post speculates about alternative futures for accessibility, such as cost-busting open source developments; self-voicing interactions; overriding inaccessibility via proxy web servers; a screenless, voiced, menu-driven PDA; and higher-level software design practices.

An MP3 YouTube converter converted me!

First, I digress to tell you about a cool utility that invoked the serendipity behind this posting. Blind Cool Tech has a podcast, Jan. 1, 2008, on a “YouTube to iPod converter”. I haven’t used YouTube much since the videos appear to my partial sight as white blobs with some hand waving going on. Last week, I began to rethink my intellectual aversion to the mindless drivel I feared populated YouTube and affronted my blindness sensibilities. The NYTimes had a piece on “Big Think”, a YouTube for eggheads that promised a variety of magazine-style videos of the ilk that interested me, namely politics and economics, reminiscent of university-based video series.

Wow, this little piece of software, the YouTube to iPod converter, really delivers and opened up a new way for me to get useful web information. The use case is: copy the URL for a video that interests you, the link you would click to invoke the viewer; paste the link into the accessible converter; choose a file name and location; choose the format type mp3; click “download and convert”; wait a while; listen to the mp3 on your PC or send it on to a digital player, in my case my Bookport. With a bit of imagination and patience, you can mentally fill in the video and also have a version to replay or bookmark. Moral of this digression: once again, podcasts from the blind community open new worlds for us new vision losers needing accessible software to stay in the mainstream. Thank you, Blind Cool Tech podcaster Brandon Heinrich! Check out my page of YouTube-converted videos on eyesight-related topics.

YouTube video on WebAnywhere Reader

By sheer luck, the first YouTube search term I chose was “screen reader”, and it turned up a provocative demo and discussion:

University of Washington Research: Screen Reader in a Browser, by Professor Richard Ladner and graduate student Jeffrey P. Bigham, in the Web Insight project at cs.washington.edu

Briefly, this experimental work addresses the problems of costly screen readers and the need for on-the-fly retrieval of web information by blind users away from their familiar screen readers. The proposed solution is a browser adaptation adding a script that redirects web pages to a so-called proxy server that converts the structure of the page, known as its document object, to text and descriptions that are returned to the browser as speech. This is pretty much what a desktop screen reader does, only now the reader and speech functions are remote. Of course, there are a gazillion problems and limits to this architecture but it appears to work sufficiently reliably and rapidly to achieve the social goals of its name, “Web Anywhere”. This research project, funded by the National Science Foundation, has also used the above architecture to modify web pages to add ALT tags from link texts, OCR of the image, and social networking tagging of images. Not only is the technology very clever, but also the work is based on observations of how blind users use the web and on a growing appreciation of the complexity and often atrocious design of web pages and use of AJAX technology that frustrate visually impaired web users, no matter the power of their screen readers or magnifiers or their skills.
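
To make the proxy idea concrete, here is a toy sketch of the kind of transformation such a server performs: flattening a page’s document object into the text a remote reader would speak, skipping scripts and surfacing image ALT text. This illustrates the concept only; it is not WebAnywhere’s actual code.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Flatten an HTML page into the text fragments a reader would speak."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0  # depth inside <script>/<style>, which hold no spoken text
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
        elif tag == "img":
            alt = dict(attrs).get("alt")
            if alt:
                self.parts.append(f"image: {alt}")
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

page = ('<html><body><h1>News</h1><script>x()</script>'
        '<p>Hello <img src="a.png" alt="logo"> world</p></body></html>')
parser = TextExtractor()
parser.feed(page)
print(" | ".join(parser.parts))  # News | Hello | image: logo | world
```

A real proxy would also preserve headings, links, and form controls so the remote speech matches what a desktop screen reader announces.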

As a former employee of funding agency NSF, a reviewer of dozens of proposals, and a Principal Investigator in my sighted days on computer security education using animation, let me tell you this U. Washington project is a great investment of taxpayer funds. The work is innovative, well portrayed for outreach, addressing monumentally important global and social issues, and helping to bring about a better educated and motivated generation of developers and technology advocates on accessibility issues.
Now, is this proxy-based architecture the killer app for web accessibility? Possibly, with widespread support from IT departments and developers, but the project sets its goals more modestly as “Web Anywhere” for transient web uses and, possibly more broadly, to address the cost of current screen reader solutions. Maybe the proxy-based approach can be expanded to other uses in demonstrations and experiments on a range of accessibility problems.

Will free screen readers shake up the rehab industrial world? My pick is NVDA

In one sense, a no-cost screen reader provides a way of breaking up the current market hierarchy, which one might unfortunately describe as a cartel of disability vendors and service providers. Yes, the premier screen readers sell for $1000, which seems justifiable by the relatively small market: the few million U.S. and international English-speaking PC users who are blind and on the rehab grid. Some, like the Blind Confidential blogger, blink, an industry insider, suggest the assistive technology industry is doing fine financially, able to afford more R&D and QA, and attractive to foreign investors. Like any segment of the computer industry, buyers become comfortable with the licensing, personalities, training, upgrade policies, and help lines and therefore resist change. In the case of the $1K products, buyers are more likely not individuals but rather rehabilitation and disability organizations with a mandate to provide user support through a chain of trained technical, health, and pedagogical professionals. A screen reader like NVDA (NonVisual Desktop Access) will challenge this industry segment as more users find it suitable for their needs, as I wrote in the “Look ma, no screens! NVDA is my reader” posting. With broader acceptance of open source as a reliable and effective mode of software enterprise, as NVDA co-develops with other flexible open source office and browser products, and as energetic developers fan out to other accessibility projects, well, NVDA might well be the killer app of cost and evolution.

Should apps depend on screen readers or be self-voicing?

However, in a more radical sense, I argue that the screen reader model itself is badly flawed and that also technical accessibility alone is inadequate to resolve the needs of blind web users.

The value of a universal screen reader is that it can do something useful for most applications by dredging out fundamental information flowing through the operating system about an application’s controls and its users’ actions. But another model of software is so-called “self-voicing”, where the application maintains a focus system that tracks the user’s actions and provides its own reactions through a “speech channel”, providing at least equivalent information to an external screen reader. Such a model can do even better by providing flexible information about the context of a user event and preferences. A button might respond upon focus with “Delete”, or “Delete the marked podcasts in the table”, or repeat the relevant section of the user manual, or elaborate a description of the use case, such as “first, mark the podcasts to delete, and here’s how to mark, then press this button, and confirm the deletions, after which the podcast files will be off your disk unless you download them by another name”. Self-voicing as speech technology is implemented by many applications that allow choice of voice, setting of speed, and even variation of voices matched to uses, e.g. the original message in an e-mail reply.

More significantly, self-voicing puts the responsibility for usability of the application directly on the developer to provide consistent, coherent, and useful explanations of each possible user interaction. Further, this information is useful both to the end user and to testing professionals, who can check that the operation is doing what it says, only what it should, and in the proper context of the application’s use cases. Ditto, a tech writer working with a developer can make an application far more usable and maintainable in the long run. So we claim that a kind of killer app development practice would be the shift of responsibility away from screen readers onto self-voicing applications, including operating systems, where development processes will be improved.
We base our claims on personal experience developing a self-voicing podcatcher, @Podder, for partially sighted users using a speech channel of copying text to the clipboard to be read by external text-to-speech applications. Another self-voicing application is Kurzweil 1000 for scanning and document management, and employing the nicest spell checker around.
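
A minimal sketch of the self-voicing idea might look like the following. The class and messages are hypothetical, not taken from @Podder’s code; the point is that the developer, not an external screen reader, decides what each control says at each verbosity level.

```python
# Hypothetical self-voicing control; message text paraphrases the
# delete-button example discussed above.
class SpeakingButton:
    """A control that announces itself at the user's chosen verbosity."""
    MESSAGES = {
        "terse":   "Delete",
        "normal":  "Delete the marked podcasts in the table",
        "verbose": ("First mark the podcasts to delete, then press this "
                    "button and confirm; the files will leave your disk."),
    }
    def __init__(self, speech_channel):
        # speech_channel might copy text to the clipboard for an external
        # text-to-speech engine, as the @Podder approach described above does.
        self.say = speech_channel
    def on_focus(self, verbosity="normal"):
        self.say(self.MESSAGES[verbosity])

spoken = []  # stand-in speech channel: just record what would be spoken
button = SpeakingButton(spoken.append)
button.on_focus("terse")
button.on_focus("normal")
print(spoken)
```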

Can overcoming missing and muddled use cases conquer inaccessibility?

We have argued in the “Are missing, muddled use cases the cause of web inaccessibility?” posting that the main culprit in web usability is not technical accessibility but the way use cases are represented, tangled, and obscured by links as well as graphics and widgets on web pages. A use case describes a sequence of actions performed to meet a specific goal, such as “register on a web site” or “archive e-mail messages”. Use cases not only lay out actions but also provide the rationale, the consequences, constraints, and error recovery procedures for interactions. Our claim is that software developers, both desktop and web application developers, force all users, sighted or blind, to infer the use cases from the page contents and layouts, often embellished with links, such as blog rolls, to enhance social interaction and increase search engine rankings. Reports such as those from the Web Insight project and the Nielsen Norman report “Beyond ALT text” describe in gory detail the frustrations and failures of visually impaired users struggling with their screen readers and magnifiers and braille displays to overcome poor use case representation as they try to keep up with sighted users in gaining information from, and performing consumerism within, the constellation of current web sites. While I certainly believe that web accessibility activists are important to removing barriers and biases, the larger improvement will come when web sites are designed and clearly presented to achieve their use cases, for the benefit of all those who gain from better web site usage. This is already occurring with re-engineering for mobile devices, where failure to offer the appropriate use case is especially apparent and, seemingly, not really that hard to fix.
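
One way to see the argument is to imagine a use case represented explicitly as data, so a page design can be checked against it. The structure below is my own illustration, not a standard notation:

```python
# Illustrative use-case record; field names are invented for this sketch.
use_case = {
    "goal": "Book a flight",
    "steps": [
        "Enter departure and arrival cities",
        "Choose dates and search schedules",
        "Select a flight",
        "Enter payment and confirm",
    ],
    "error_recovery": "Return to search results without losing entered data",
}

def first_step(uc):
    """A landing page should lead directly to step one of its primary use
    case, not bury it under specials, cruises, and frequent-flier detours."""
    return uc["steps"][0]

print(first_step(use_case))  # Enter departure and arrival cities
```

If designers worked from records like this, a page could be audited: every step reachable, in order, without wading through unrelated links.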

How will mobile devices improve accessibility?

Finally, what about the marvelous mobile devices such as the fully voiced, menu-driven Levelstar Icon and APH Braille Plus Mobile Manager? After 8 months of Icon addiction, I firmly believe that, cost aside, this form of computer is far superior to conventional Internet usage for the activities it supports, mainly e-mail, RSS management, browsing, and access to resources. For example, I can consume the news I want in about an hour from the NY Times, Washington Post, Wall Street Journal, Arizona Republic, CNN, Inside Higher Ed, CNET, and a host of blogs. And that’s BEFORE getting up in the morning. No more waiting for web pages to load on a news web site, browsing through categories of information that don’t interest me, and bypassing advertisements. Additionally, I am surprised at how often I use the Icon’s “Mighty Mo” embedded browser by wireless rather than open up the laptop to bring up Firefox and fend off all my update-anxious packages and firewall warnings. Yes, life with the Icon is “living big”. The Icon is mainly part of the trend toward phones and wireless devices, but it just happens to be developed by people who know what visually impaired users need and want.

Maybe, somewhere out there, is a wondrous software package that will dramatically boost the productivity and comfort of visually impaired computer users. With some assurance, we can recognize an upcoming generation of open-source-oriented developers seasoned by traditional assistive technology and adept at both project organization and current software tools. Funders and support organizations can look ahead to utilization of their innovations and improvements. But maybe the core problem is much harder, as we claim: a disconnect in “computational thinking” between software designers who have found their way through models and user-oriented analysis and those web designers stuck at the token and speechless GUI level of browsers and web pages. Empirical researchers on accessibility are starting to witness and understand the fragility of users caught between artifacts designed for sighted users and clumsy, superhuman-emulating tools such as screen readers and magnifiers, while the proper responsibility for accessibility falls on developers who have yet to appreciate the power of readily available speech channels alongside graphical user interfaces.

What do others think? Is there a “killer app” for accessibility? Comment on the “As Your World Changes” blog or send e-mail.

Web Inaccessibility — Are Missing, Muddled Use Cases the Culprit?

As I have been learning to traverse websites using the nvda screen reader (previous post), I try to formulate the principles of design and implementation that make this task more or less productive, as well as pleasurable. At the same time, I have been tutoring myself in the accessibility literature, mostly in the form of blogs and podcasts. This post recounts some of my frustrations, diagnoses possible remedies, and offers a sweeping conjecture about the root cause of much web inaccessibility and difficult usability.

As I improve my proficiency with the nvda screen reader and learn to navigate web sites by voice and keyboard, I am constantly amazed at how hard it can be to get where you want to go and avoid heading down the many, well, blind alleys. I am an Internet veteran: first email around 1976, worked with protocol pioneer Jon Postel, saw Mosaic in late 1992, had my first web page in 1993, set up my first domain name and website in 1995, and several websites since, plus writing search analysis software, Java applets used around the world for security training, and a podcatcher for partially sighted people like me. However, all too often, I find myself fumbling, stumbling, and cursing my way around websites, wondering why using a browser with a screen reader is so difficult, error prone, and exhausting. Is it the tools I am using? Or my admitted status as a self-trained beginner in the low-vision world, ignorant of accessibility tricks and techniques? Or maybe I expect the task to be easier than is possible, for me or others?

To document my environment: Windows XP on tablet PCs; the Mozilla Firefox browser, used for over 3 years; the TextAloud toolbar for reading and zooming on pages; the nvda screen reader, used for 2 months as discussed in a previous post; responsive natural synthetic voices; pretty good bandwidth on home wireless and cable. My main browser interactions: h to move among page-section headings; k among links; control F for quick-find page search; tab among page items; up and down arrows through lines; page up, page down, home, and end to page boundaries; INS + down arrow to read consecutively down a page; INS + SPACE to pass typing through into form fields; control K to start a search; control L to open a new site.

Here are a few situations, complaints, diagnoses, and remedies.

Booking a flight on USAir, fondly known in Arizona as America West: I cannot find the boxes to query for flight schedules, then make a choice and book the flight. So I reluctantly call the 800 number, beg my way out of the $10 booking penalty, and hope for a good fare. The problem, in software design terms, is that USAir has scrambled its use cases together on the first page, providing last-minute specials, detours to frequent flier data, wonderful offers of cruises and vacations, and practically everything the airline does. On a good sight day, I can locate the depart/arrive boxes to start, but not screenless. Like many commercial booking websites, I give it a rating of “hopeless jumble of links”, although the site may still conform to the letter of accessibility rules.

Amazon has also seemed like another jumble of links: recommendations, my account, searches for all kinds of items, invitations to become a seller. But, thankfully, there is a link to a more accessible, streamlined page I can actually use most of the time. It is ironic that the needs of mobile users viewing small screens coincide with the needs of visually impaired users traversing streamlined web pages. This allows me to get most of a pre-defined purchase completed, going into exploration and recommendation mode when I choose rather than hitting those as obstacles on the route to a purchase. I still need sighted help to get coupon numbers copied onto the purchase page, but transactions appear less daunting now on Amazon. Actually, on a recent return, the website appears to be undergoing a makeover from accessibility experts – kudos to them!

Hurrah! It is so exhilarating to see a simple page show up, just like the early days of the web, before images, AdSense, navigation bars, dynamic content, etc. So, here is a remedy when doing battle with a complex commercial site: look for a “basic HTML”, “mobile friendly”, or “mobile optimized” link and throw back to the early days of the web. Thanks to Allison Sheridan for urging me in this direction on her vision-friendly NosillaCast podcast.

How about search sites? Well, Google is pretty good at separating its search results with headings, with intervening links to Google alternatives, including the extremely valuable “View as HTML”, which avoids a cycle of save, import, export as text, and listen, rather than opening the usually unneeded Microsoft Word or Adobe PDF. On the other hand, the tagged, search-based Gmail is a tangled overlay of use cases. For example, archiving messages by a filter requires several steps: down to a filter label, over to a Select All link click, a combo pull-down to the Archive list item, and a return to the Inbox. In single-step mode, one can check the box for conversations, then find the Archive button. Or there are keyboard shortcuts that my mind simply boggles at learning. At least reading Gmail is enabled by a POP3 account on my Icon or, soon, a fully voiced Mozilla Thunderbird. Google offers a separate search that weights accessibility into its search results, but I have not used it sufficiently to comment.

Blog readability varies a lot, but at least there is a common structure: entries with associated reply and comment fields; archives; blogrolls of links; and assorted metadata and added pages. The clincher is the choice of template to place navigation bars relative to blog entries — right and below being best for screen readers and spoken RSS clients. While not easy, the WordPress dashboard is usable through a combination of good structure and parsimonious, informative link labels.

Government web sites often, while conformant with the so-called 508 mandate, follow a recognizable organizational or legislative hierarchy, sometimes with a touch of hilarity. I’m familiar with the NSF FastLane proposal management system, which has changed little in the past 5 years except for an accretion of bureaucratic guano. It took me 17 tabs to get to the login box on one page, passing links about travel, registration, policies here, manuals there, when the sole purpose most users have on this page is to enter a username and password. Later I found myself scrolling over a large block of text above the worksheet, a rendition of workload requirements that nobody in their right mind would read except a hapless blind person who got stuck there. My complaints to a government representative were duly noted and agreed with, but it will take a very fresh perspective to turn a bureaucratic haystack into a really usable website, well beyond the purview of accessibility standards that may simply divert attention to the wrong details.

I was highly impressed recently when a website put up a link for screen readers. Following that path, I soon ran into the hilariously dumb “click here” link text that should be a red flag for any accessibility analysis: click here for what? And there was a whole sequence of “click here” on a page expressly designed for screen readers! Geez, where are the accessibility police?
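For reference, accessibility guides recommend link text that makes sense out of context, since screen reader users often pull up a list of a page’s links with no surrounding prose. A hedged sketch, with placeholder URLs and wording of my own invention:

```html
<!-- Bad: a screen reader's links list reads "click here, click here, ..." -->
<p>For flight schedules, <a href="/schedules">click here</a>.</p>

<!-- Better: the link text itself names the destination -->
<p>View the <a href="/schedules">flight schedules</a>.</p>
```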

Well, that’s enough complaints; what does the literature of accessibility tell us? First, there are the common-sense guidelines, see links below, that cover sensible ordering, link text, graphic ALT tags, and use of headings to reveal page structure. Any trip into the standards literature shows how complex the language and tradeoffs become when compiled by a group of experts trying to reach consensus — not an easy read for anybody, and a good excuse for routine web designers to avoid thinking about accessibility. The standout book for me is “Constructing Accessible Websites”, which tours the landscape of HTML and CSS as well as the legal issues, e.g. can that routine web designer be held accountable for violating ADA laws?

Blogs such as “A List Apart”, WebAxe, WebAIM, etc. often delve into highly technical issues of web accessibility at the feature and technology level. The tradeoffs of writing a web page one way or another are often poorly understood and tricky to articulate, so the expense of applying a particular rule can be hard to justify. Indeed, my technical background combined with my accessibility needs leads me to commiserate with people who must deal with accessibility late in website design, or even later in maintenance, violating the software process rule that cost escalates with delay in addressing a solid requirement.

I have been conflating two terms here, “accessibility” and “usability”, with the latter my main concern. Accessibility is more technical, stipulating that the system stack of hardware, operating system, applications, and screen display provide sufficient and correct information about the screen data state and events for screen readers to interpret and pass on to users. This architecture is historical and, I believe, wrong to its core now that we have a “speech channel” that could shift the responsibility for interpretation and amplification of data so as to bypass or supplement the screen reader, but that’s a future posting. Usability refers to the bottom line of whether users can complete the tasks at hand. Inaccessible features here and there may be barriers to usability, but issues of separation of content and presentation, well-planned navigation, and display of the right stuff at the right time most determine usability.

For this Vision Loser, there is an internal battery reading of energy consumed by tasks, enabling me to predict impossible tasks and to schedule smaller chunks of work that can be completed. We noted in our post on “Extreme Voting” that voting tasks fall in the range of Olympic events, which must be completed under severe time constraints with no prior training or practice, complicated further by long ballots. To sighted people open to a comparable challenge: use a talking ATM to withdraw $100 in less than 1 minute.

A few conclusions are:

  • The book “Constructing Accessible Websites” validates my navigation complaints as common and often cured by “link to content” and “jump to sidebar” links, modest-sized navigation bars, supersmart screen readers able to recognize chunks of HTML as non-content to bypass, and avoidance of the pernicious, dumb “click here” or “learn more” link. These are signposts of web site designers’ attention to accessibility and of techniques to improve my browsing practice.
  • Indeed, I am not fully empowered by my chosen screen reader to jump comfortably to all parts of a page. I will wait for the next version, partially developed under a Mozilla grant, to determine whether this youthful product is remiss, and watch carefully for the productivity improvements noted above. Meantime, I can live with excess links as long as I know where I am situated on a page, e.g. by a “heading tour”.
  • I just can’t help but reverse engineer each transactional website into its use cases and mentally write an introduction I wish were available as a spoken site overview.
  • The trend toward mobile pages offers a practical remedy for working on many websites, with hope for momentum to alter web design.
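The “link to content” cure mentioned in the first point above is typically a small anchor placed before the navigation clutter so a screen reader can bypass it, while headings support the “heading tour”. A minimal sketch, with ids, labels, and links invented for illustration:

```html
<body>
  <!-- First focusable item: lets screen reader users skip the navigation bar -->
  <a href="#content">Skip to main content</a>

  <ul> <!-- a modest navigation bar -->
    <li><a href="/specials">Last minute specials</a></li>
    <li><a href="/vacations">Vacations and cruises</a></li>
  </ul>

  <div id="content">
    <h1>Book a flight</h1> <!-- headings reveal the page structure -->
    <!-- the depart/arrive form goes here, first in reading order -->
  </div>
</body>
```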

So, what is the big deal with “use cases”? Here comes the sweeping conclusion.

The concept is quite simple: a system’s design starts, in part, from a suite of named paths through the system’s eventual operations, interleaved with those of users and other systems. Each use case has a precondition for its proper execution, a postcondition stating the changes and outputs, and considerations of errors and options. In practice, a use case analysis can take several weeks and result in multiple pages of structured text and graphics, sometimes produced by CASE (computer-assisted software engineering) tools. This kind of stuff is presently taught in software engineering and object-oriented analysis and design courses.
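As a hedged illustration (details invented, not drawn from any real analysis), the flight-booking complaint above might trace back to a use case written roughly as this structured text:

```
Use case:      Look up flight schedules
Actor:         prospective traveler
Precondition:  departure city, arrival city, and travel date are known
Main path:     enter cities and date; submit; review matching flights
Postcondition: a list of candidate flights is displayed for selection
Errors:        no flights found; ambiguous city name; invalid date
```

A page built around this one use case would put exactly those inputs and outputs front and center.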

My complaint is that these use cases, whether explicit or not, are then mapped onto a few web pages with forms, combo boxes, and text labels. The situation is close to what we called in the 1970s “spaghetti code”, where control flow was woven through small sections of code because the state of programming languages did not sufficiently support modularity or the world view of object orientation. HTML is the assembly language that is unfortunately available to thousands of web designers not educated in the more advanced methodology and tool base that systematized programming to some extent.

The sighted person has an intuitive grasp of what each form needs and the physical agility to complete it and to detect and correct mistakes. The visually impaired person must somehow parse out the use cases, find the appropriate forms, meet the unidentified preconditions, find error messages and fault locations, avoid cancel buttons, and complete the task before a time-out, wireless failure, or automatic PC update invalidates minutes, or hours, of tedious work.

Note again how nicely the “mobile revolution” can cooperate with accessibility. A site developer must identify the most important use cases to place on a mobile-friendly page, strip off ads and special offers, make the functional forms prominent, and not clutter the page with navigation. That busy traveler needs to order a gadget without recommendation use cases, frequent-purchaser signups, and the latest added options; so does the visually impaired user. Separate and save the recommendations, special-offer shopping, and account management until the transaction is completed, or for idle browsing moments.

Looking back at our examples: a flight schedule lookup should not be cluttered by cruise offers; a log-on to enter a review system should not be overrun with travel instructions; a workload policy is worth no more than a link off a worksheet page, at worst at the bottom; navigation bars aren’t relevant to most use cases; and a mail archive is a lot different from an email lookup. Accessibility writings warn of the difficulties presented by mixing presentation with structured content, e.g. omitting headings. An insidious practice seems to be the desire to make all use cases available on a single page. This is a weird form of optimization I wish someone could explain to me. Is this optimization a root cause of broken rules of accessibility, poor structure, an insurmountable challenge to screen readers, and a constant pain to visually impaired users?

Not surprisingly, a search on the terms “use cases and assistive technology” or “use cases web page accessibility” shows some interest in this topic in the W3C and usability communities. My epiphany, for my own learning and continued improvement in web skills, is that it helps to construct a mental map of the use cases and how they are implemented in the navigation and interaction items of a website, whether on a single page or across a site. My wish is that web page designers would present an overview of their web site in use case terms. In the longer term, it would be great to have multiple presentations, such as the trend toward mobile-friendly pages, where the use cases are separated onto distinct pages so that the mental load of intuiting and remembering them becomes less critical to successful use of a site.

Recently, I ran across JumpChart, a web page design tool that supports what usability people call a wireframe. This tool is exactly the place to interject both accessibility concerns and mechanisms for supporting accessibility.

Wow, is there a lot of substance to this topic. I hope soon to find counter-example websites to the troubles I attribute to missing and muddled use cases, as well as highly accessible pages in both the technical and usability senses. Finally, my own mea culpa for all the stupid stuff I have dropped onto websites that made usability harder — I am working to correct my bad style. I haven’t addressed the Target lawsuit or CAPTCHA or other biases, and so much more is known about hacks and techniques for accessibility. See our podcast library for hours of informative listening.


  1. Guidelines for 508 government website mandate
  2. Recommendations for accessibility from MIT
  3. Amazon entry and reviews of the “Constructing Accessible Websites” book; also available from Bookshare
  4. accessibility consulting and resources from Jim Thatcher
  5. Webaxe accessibility tips and podcast
  6. Wikipedia article on “use cases”
  7. WebAIM blog roundup of blogs on accessibility
  8. JumpChart web design service
  9. NVDA, the NonVisual Desktop Access free, open-source screen reader
  10. podcast library on “web accessibility”, collected by @Podder podcatcher

Mouse Hacks, Magnifiers, and Being Your Own System Integrator

In this post, we look for ways to reduce the costs of our computing environment as we deal with vision loss. Magnifiers are helpful, sometimes essential, and, we show, can be very low-cost with additional benefits.

Assistive Technology (abbreviated AT) software comes in several cost categories: built-in, $0, $50, $500, and $1000. The “big AT” vendors sell to individuals, of course, but the main market is the IT and ADA support organizations of government agencies and employers, i.e. the “budgets”. I claim that an independent Vision Loser can save by becoming a System Integrator of sorts, not only avoiding the costs of acquiring “Big AT” but also reducing the complexity of installation, maintenance, and training.

Here’s a little case study in System Integration. First, some caveats: I am neither a trained rehab/AT specialist nor an experienced System Integrator. But I did go to conferences with these types and have assembled a library of podcasts and web articles with excellent advice.

What we are calling a “System Integrator” is someone who looks at how components work individually and composes a new “system” where the components work together to achieve a goal. With the uncertainty of progressive vision loss, a worthy goal is frequently a kind of testbed to experiment with techniques that compensate for vision deficiencies and offer a measure of comfortable use. Experimental results may lead to identification of a suitable product or provide experience for evaluating more costly alternatives.

Here’s our goal: low-cost magnification capabilities for a Windows XP computing system. The underlying problem is for this Vision Loser to have screen magnification available when needed, to complement self-voicing and screen reading software (a future post). I really want to know both what is (1) necessary and (2) sufficient to meet my vision needs, keeping in mind that needs will change as vision changes. Change is as much daily, even hourly, variation as slower deterioration.

Well, how about that! Microsoft accessibility software includes a simple stationary magnifier with several levels of magnification and inversion of screen colors. Stationary means it doesn’t follow the mouse and can be docked at one of the borders so it doesn’t move around. Indeed, I found I liked a stationary magnifier set to level 2, inverted, and docked at the top. The downside of the alternative mouse-tracking mode is vertigo from the magnifier following the mouse. Only time and trial would show the stationary magnifier’s sufficiency.

Enter the “mouse”. And yes, we were talking about magnifiers, not pointers or vermin! On a trip to a computer store, I decided to pick up a new wrist rest and a more comfortable mouse. By sheer luck, my niece shopper-assistant pointed out a mouse with a magnifier. At home, I discovered that this little guy really is useful. It provides a “tracking” magnifier to complement the stationary Windows lens, again with levels of magnification and resizing of the tracking box. Now, with a flick of an extra side button on the mouse, up comes a magnifier aimed at the text I want to read. The product model is called a Microsoft Laser Mouse 5000, but these names and model numbers may have changed.

But wait, what about the other extra button capabilities that come with the mouse? Only the right side button, an extra sliver, is being used, to pop up the tracking magnifier. Wow, I have these other tools that read to me when I copy text to the clipboard (see previous post). I wonder if I can link these two. Indeed, the left side mouse button can be assigned to Select All and the wheel button to Copy. Now, with two clicks, I can hear a window of text. Cool! This saves fumbling around the keyboard for Control-A then Control-C, or a couple of trips down a context menu.

This is what computing folk call a “hack”, a clever way to get a job done, maybe not obvious or elegant but definitely effective. Indeed, O’Reilly Press has raised the “hack” to a publishing genre, with piles of books that collect, explain, and propagate hacks for Amazon, Google, podcasting, even mental productivity.

There are always trade-offs in any system design. The first is that a solution only works if you remember to use it! That use must become part of your reflex repertoire. But then you’re in trouble on a different computing system at a friend’s office or on a consulting gig. I forgot my mouse on a recent trip and walked over to a Staples to get a replacement, a smaller notebook mouse with a single side-button magnifier. It worked right out of the box, but getting the thing released from its hard plastic covering required 2 hotel clerks and some dangerous instruments. Then I really noticed the loss of select-copy functionality as I struggled under fluorescent lights and a nasty wireless security system. Further, to make my hack work, Windows security has to permit copy to clipboard, which many IT departments like to override.

What if I want or need more magnification? Software like ZoomText is widely used (I hear from podcasts) and is designed especially for partially sighted people. A trial early in my vision loss showed how many ways graphics could be adjusted to achieve magnification and contrast effects, with the primary benefit of crisper text at higher levels of magnification. Indeed, vision is so complicated – is it color, contrast, glare, font, or other factors that are most crippling to a particular Vision Loser? And my vision changes so much, with lighting conditions, time of day, cumulative exposure, and who knows what other factors. In any case, the $500+ price tag was out of my budget at the time of the trial.

What is the System Integration lesson? In “computational thinking” terms, we look for the abstract interfaces of components, primarily their inputs and outputs. We don’t worry about the buttons or the user interface or menus but focus on the generic capability. In this example, the system clipboard is a (hidden) input to TextAloud (or a similar tool that monitors the clipboard) and our MS Laser Mouse has a (hidden) output to copy selected text to the clipboard. Well, duh, the clipboard pervades Windows applications, but now we have endowed it with text-to-speech reading capabilities. We’ve arrived at a different way of thinking about the united capabilities of two separate components – a text reader application and a mouse.

When you put yourself in System Integrator mode, you ask: what’s my inventory of components? what are their abstract interfaces? how can I connect these applications together? How much complexity is added to my system by now having inter-linked components, e.g. when one is upgraded? What forms of training are now required, including getting used to, learning the foibles of, and gaining reflex control over the new capability? How do my solutions compare with each other and what are the trade-offs? Is there a show-stopper against or in favor of a particular solution?

One of the most serious lessons of the Software Engineering field, where I formerly taught, is the importance of getting the requirements right early on. That usually is not possible in our Vision Loser world, but rather we need to set up an experimental testbed where we can try out different ways of compensating for vision loss. Necessary and sufficient are always concerns, e.g. an expensive solution may be sufficient but not necessary while a low-cost solution may be necessary for some uses but insufficient for others.

Readers of this posting might be wondering: why not ask an expert? Well, I don’t have one handy, have never had computer rehab support from an employer or agency, and, frankly, have already had some unsatisfactory experiences with consumer low-vision businesses. But really, the experts are out there, offering much good advice on podcasts and in accessibility publications. Thanks to them.

Helpful podcasts and articles:

Access World comparison of magnification products: search (upper corner) for “ZoomText, MAGIC, magnifiers”

Barrier-Free IT Tips and Tricks podcast on the Windows Accessibility Wizard

Literacy Questions for Magnification, Karen McCall from Carlin Communications
(link to be found)

O’Reilly “Hacks” series

Microsoft Laser Mouse: search for “Microsoft Laser Mouse” and “on screen magnifiers”

Seeing Through Google Book Search

Google’s blog recently announced the availability of Google Book Search with direct links in search results to out-of-copyright books for download as PDF. This action opens the portion of scanned books in the Google Library to print-disabled readers with traditional text-to-speech tools. However, this sub-collection is, by virtue of its vintage years, of value to only a few scholars and occasional readers. The remaining scanned books remain inaccessible in both their stored content and page images displayed in the book search results.

I decided to experiment to learn (1) what’s in the Google Library that relates to my professional and personal interests and (2) what could I actually get to expand my library, suitable for my print-disabled status? The BLUF (Bottom Line Up Front): (1) I can access less than my fully sighted professional colleagues can, and (2) this experiment didn’t yield any additions to my library.

Here’s the experiment: query Google Book Search on a topic I know something about and assess the value of the resulting books. The topic picked was one where, well, being immodest, I had myself published several articles in the 1980s and 1990s, using the terms “software testing”, “formal specification”, and “formal methods”. Working from memory, the book list showed books I’d previously owned, some I’d forgotten about, and a few I’d never known of. Several books were actually government publications, e.g. from NIST, and several were primarily reprints, many available from other electronic sources, such as the IEEE Digital Library. None appeared to be available in downloadable form, but I wasn’t sure what annotation would tell me that. I was at first confused about “full view”, which did NOT mean downloadable but rather available for display as images in search results. The book lists were fairly long, between 50 and 100 books, indicating comprehensive scanning or publisher contribution on my topics of interest.

So, what did I actually get to “see”? I was running Windows XP in its standard accessibility mode, with the docked magnifier and Narrator screen reader plus the simple zoomed magnifier associated with the Microsoft Laser Mouse; the High Contrast Black Windows theme; and Mozilla Firefox with images off, using the free Fire Vox screen reader and TextAloud for reading page text. Of course, with images off I got a snippet of page text, a big empty block of missing image, and various book metadata, including where to buy or borrow. So, I turned images ON in the browser and, ouch, was it bright! I could recognize a page, almost read the bright text in the inverted magnifier at size 4, but could not really glean much. Probably a more effective, and more costly, zoom could get more words into clarity, but there was no substitute for having the text read to me to gain the context of the search result. This is the major point – there’s nothing in, around, or any way out of the image into screen-readable mode. The image might as well have been a lake, a building, or porn for all the information I could glean from it. I wondered why the omnipotent Google toolbar, gathering data about my searches and offering me various extra search information, could not also be the reader.

Staring at the empty image was really disconcerting, even demoralizing. Were I still in the grant-grubbing, publication-hungry mode of an academic researcher, I would be disadvantaged in getting paragraph-sized chunks of information to quote or cite (without ever handling the book itself). And these book references were to my own work, either citations or reprints of articles I’d written. Google Book Search does provide an excellent overview and snapshot of an era of research – who, what, and why – but not much more than a reminder for me. Of course, I could buy or borrow a book, but then I’d need to scan it to get anything “readable”. Alternatively, running Google Scholar would lead me to many of the same resources in the Digital Libraries, where I could buy or use a subscription to get the articles. Or, perhaps, a local employer or public library could get an article through inter-library loan. It does appear that my search was biased toward retrieving as many reprint collections as books with original content, perhaps a side effect of computing literature publishing practices.

What about the promised downloadable content? I looked up “Wuthering Heights”, downloaded the PDF (link in the upper corner), and went through the usual screens of Adobe Reader updating itself. Then, damn, ouch, another bright window of PDF I could not read. I remembered Adobe had nicely provided an accessibility wizard buried down on the Help menu and Read Out Loud on the View menu. After switching to the more restful yellow-on-black rendering of the PDF, I was able to hear the very interesting page of Google warnings about use of the book. Were I to actually read the book, I’d want it converted to text for downloading to the BookPort for reading in a synthetic voice (good old “Precise Pete”) or to MP3 for an audio player. The PDF was, for me, just a format to get out of the way, although I could have read the book via Adobe’s Read Out Loud, staying tethered to the PC. It would have been preferable to have the option of the book in the DAISY format, directly input to many text-to-speech tools, i.e. more standard than PDF.

Well, does Google Book Search do anything for this print-disabled person? I don’t think so.

These issues are discussed at more length in a 2005 white paper by Benetech/Bookshare founder Jim Fruchterman. He points out various ways that images might be annotated to stay within the conventions of web pages, but the main problem is that publishers, Google, and some intermediaries need to cooperate to live up to the spirit of the legal rights of print-disabled people to access book content on a par with fully sighted individuals.

My wish is that Google would extend its toolbar to provide an audio form of the page for those who hold a certificate of print disability, similar to Bookshare’s policy. This would provide as much Entitlement as seems feasible for the print-disabled, preserve rights to the images, and slightly raise Empowerment for print-disabled users, who could listen and make notes or do something else.

Another concern I’ve mentioned, and may have just missed in the page links, is the range of other options for some of the content in the books sampled in this experiment. Many reprints are available via Google Scholar, the former NEC CiteSeer, the Digital Libraries of professional societies, and the “database” collections of traditional library services. Is there a disconnect from Google Book Search to these alternative services?

Bottom line: getting a list of books discussing my topics is a good thing, but displaying ONLY page images serves sighted users alone. And I doubt my reading profile would identify benefits from downloadable full text of out-of-copyright books.

Well, this article has a definite “what’s in it for me?” tone, but I’d like to reiterate that a tiny slice of the content there consists of words I wrote myself, receiving no compensation from publishers, only employers or research contracts. It’s ironic that I cannot enjoy going back to read what others have written about my work, or continue the work with the same ease as sighted colleagues, nor get those empty images out of my mind. In the theme of this blog, Google Book Search is not a classy use of technology, except in the “digital divide” sense of establishing different classes of users depending on their sight capabilities. I am not anti-Google, just disappointed.

Google Book Search

“Comments on Accessibility of Google Print”, white paper by Jim Fruchterman