What do Vision Losers want to know about technology?

Hey, I've been off on a tangent from writing about adjusting to vision loss, wandering instead into a rant about, and praise for, website accessibility. Also absorbing my blogging efforts was a second run of Sharing and Learning on the Social Web, a lifelong learning course. My main personal tutors remain the wise people of #a11y on Twitter and their endless supply of illuminating blog posts and opinions. You can track my fluctuating interests and activities on Twitter @slger123.

To get back in action on this blog, I thought the WordPress search-term stats might translate into a sort of FAQ or update on what I've learned recently. Below are subtopics suggested by my interpretations of the terms people used to reach this blog, often inaccurately; some people searching for tidbits on movies or books called 'Twilight' might be surprised to read a review of the memoir of an elder gent battling macular degeneration in the 1980s. Too bad, but there are also people searching for personal experience of losing vision and for technology that overcomes the limitations of vision loss. These folks are my target audience, who might benefit from my ramblings and research. By the way, comments or guest posts would be very welcome.

This post focuses on technology while the next post addresses more personal and social issues.

Technology Theme: synthetic speech, screen reader software, eBooks, talking ATMs

Terms used to reach this blog

  • stuff for blind people
  • writing for screen readers
  • artificial digital voice mp3
  • non-visual reading strategies
  • book readers for people with legal blind
  • technology for people with a print-disability
  • apps for reading text
  • what are the best synthetic voices
  • maryanne wolf brain’s plasticity
  • reading on smart phones
  • disabled people using technology
  • synthetic voice of booksense
  • technology for legally blind students
  • audio reading devices
  • reading text application
  • synthetic speech in mobile device
  • the use of technology and loss of eyesight
  • installer of message turn into narrator

NVDA screen reader and its voices

    Specific terms on NVDA reaching this blog:

  • NVDA accessibility review
  • voices for nvda
  • nvda windows screen reader+festival tts 1
  • videos of non visual desktop access
  • lag in screen reader speaking keys
  • nvda education accessibility

Terminology: screen reader software provides audio feedback by synthetic voice to users operating primarily on a keyboard, announcing events, listing menus, and reading globs of text.

How is NVDA progressing as a tool for Vision Losers?
Very well, with increasing acceptance. NVDA (NonVisual Desktop Access) is a free screen reader developed under an international project of innovative and energetic participants with support from Mozilla and Yahoo!. I use NVDA for all my web browsing and Windows work, although I probably spend more hours with non-PC devices like the Levelstar Icon for Twitter, email, news, and RSS as well as the BookSense and Bookport for reading and podcast listening. NVDA continues to be easy to install and responsive, gradually gaining capabilities like Flash and PDF, but occasionally choking on memory-hog applications and heavy-duty file transfers. Rarely do I think I'm failing because of NVDA limitations, but I must continually upgrade my skills and complain about website accessibility (oops, there I go again). Go to:

The voice issue for NVDA is its default startup with a free, open source synthesizer called eSpeak. The very flexible youngsters who have lived with TTS (text-to-speech) their whole lives are fine with this responsive voice, which can be carried anywhere on a memory stick and adapted for many languages. However, oldsters often suffer from "synthetic voice shock" and run away from the offensive voices. Now devices like the Amazon Kindle and the iPod/iTouch gadgets use Nuance-branded voices whose quality falls between eSpeak and the even more natural voices from NeoSpeech, AT&T, and other vendors. Frankly, this senior citizen prefers older robotic-style voices for book reading, especially when managed by excellent firmware like the Bookport Classic from APH. Here's the deal: (1) give eSpeak a chance, then (2) investigate better voices available at the Voice and TextAloud store. Look carefully at licensing, as some voices work only with specific applications. The main thing to remember is that your brain can adapt to listening via TTS with some practice, and then you'll have a world of books, web pages, newspapers, etc. plus this marvelous screen reader.

Apple Mania effects on Vision Losers

Translation: What are the pro and con arguments for switching to Apple computers and handheld devices for their built-in TTS?
Good question. Screenless Switcher is a movement of visually impaired people off PCs to Macs because the latest Mac OS offers VoiceOver text-to-speech built in. Moreover, the same capabilities are available on the iPhone, iTouch, and iPad, with different specific voices. Frankly, I don’t have experience to feel comfortable with VoiceOver nor knowledge of how many apps actually use the built-in capabilities. I’m just starting to use an iTouch (iPod Touch) solely for experimentation and evaluation. So far, I haven’t got the hang of it, drawing my training from podcasts demonstrating iPhone and iTouch. Although I consider myself skilled at using TTS and synthetic speech, I have trouble accurately understanding the voice on the iTouch, necessary to comfortably blend with gesturing around a tiny screen and, gulp, onscreen keyboard. There’s a chicken-and-egg problem here as I need enough apps and content to make the iTouch compelling to gain usage fluency but need more fluency and comfort to get the apps that might hook me. In other words, I’m suffering from mild synthetic voice shock compounded by gesture shyness and iTunes overload.

My biggest reservation is the iTunes stranglehold on content and apps, because iTunes is a royal mess and not entirely accessible on Windows, not to mention that it wants to sell me things I can get for free. Instead of iTunes, I get my podcasts in the Levelstar Icon RSS client and move them freely to other devices like the BookSense. Like many others with long Internet experience, such as RSS creator and web tech critic Dave Winer, I am uncomfortable with Apple controlling content, applications, and our very own materials, reducing users to consumers rather than fostering their own creativity. Could I produce this blog on an iPad? I don't know. Also, Apple's very innovative approach to design doesn't result in much help for the web as a whole, where everybody is considered a competitor rather than a collaborator for Apple's market share. Great company and products, but not compelling to me. The Google Android marketplace is more open and will host many apps also developed for Apple products, but doesn't yet seem to be accessible at a basic level or in available apps. Maybe 2010 is the year to just listen and learn while these devices, software, and markets develop, while I continue to live comfortably on my Windows PC, Icon Mobile Manager and docking station, and book readers. Oh, yeah, I'm also interested in Gnome accessibility, but that's a future story.

The glorious talking ATM

Terms used to reach this blog

  • talking ATM instructions
  • security features for blind in ATM

What could be more liberating than to walk up to a bank ATM and transact your business even if you cannot see the screen? Well, this is happening in many locations and is an example for the next stage of independence: store checkout systems. Here's my experience. Someone from the bank or an experienced user needs to show you where and how to insert your card and the earbud plug. After that the ATM should provide instructions on voice adjustment and menu operations. You won't be popular if you practice for the first time at a busy location or time of day, but after that you should be as fast as anybody fumbling around from inside a car or just walking by. Two pieces of advice: (1) pay particular attention to CANCEL so you can get away gracefully at any moment and (2) always remove your earbuds before striding off with your cash. I've had a few problems: an out-of-paper condition or mis-feed doesn't deliver a requested receipt, the card protocol changed from insert-and-hold to insert-and-remove, an unwanted offer of a credit card delayed transaction completion, and it's hard to tell when a station is completely offline. I've also dropped the card, sent my cane rolling under a car, and been recorded in profanity and gestures by the surveillance camera. My biggest security concern, given the usual afternoon traffic in the ATM parking lot, is the failure to eject or catch a receipt, which I no longer request. But overall, conquering the ATM is a great step for any Vision Loser. It would also work for MP3 addicts who cannot see the screen on a sunny day.

Using WordPress



  • Wordpress blogging platform accessibility

  • wordpress widget for visual impaired

Translation: (1) Does WordPress have a widget for blog readers with vision impairments, e.g. to increase contrast or text size? (2) Does WordPress editing have adjustments for bloggers with vision impairment?

(2) Yes, 'screen settings' provides alternative modes of interaction; e.g. drag and drop uses a combo box to indicate position in a selected navigation bar. In general, although each blog post has many editing panels, e.g. for tags, title, text, visibility, etc., these are arranged in groups, often collapsed until clicked for editing if needed. Parts of the page are labeled with headings (yay, H2, H3, …) that enable a blog writer with a screen reader to navigate rapidly around the page. Overall, good job, WordPress!

However, (1) blog reader accessibility is a bit more problematic. My Twitter community often asks for the most accessible theme but doesn't seem to converge on an answer. Using myself as tester, I find WordPress blogs easy to navigate by headings and links using the NVDA screen reader. But I'm not reading by eyesight, so I cannot tell how well my own blog looks to either sighted people or those adjusting fonts and contrast. Any feedback would be appreciated, but so far no complaints. Frankly, I think blogs as posts separated by headings are ideal for screen reading and better than scrolling if articles are long, like mine. Sighted people don't grok the semantics of H2 for posts, H3 for subsections, etc. My pet peeve is themes that place long navigation sidebars *before* the content rather than to the right. When using a screen reader I need to bypass these, and the situation is even worse when the page downloads as a post to my RSS client. So, my recommendation on WordPress themes: two columns with content preceding navigation, except for the header title and About.

Books: iBooks, eBooks, Kindle, Google Book Search, DAISY, etc.


  • kindle+accessibility
  • how to snapshot page in google book
  • is kindle suitable for the visually impaired?
  • how to unlock books “from kindle” 1
  • is a kindle good for partially blind peo 1
  • access ability of the kindle

I'll return to this broad topic of readers and reading in a later post. Meantime, there's a New York Times op-ed article on the life-cycle and ecosystem costs of print and electronic books. My concern is that getting a book into one's sensory system, whether by vision or audio, is only the first step in reading any material. I'm working on a checklist for choices and evaluation of qualities of reading. More later.

Searching deeper into Google using the Controversy Discovery Engine

You know how the first several results from a Google search are often institutions promoting products or summaries from top-ranked websites? These are often helpful, but even more useful, substantive, and controversial material may be pushed far down the list of search result pages. There's a way to bring these more analytic pages to the surface by extending the search terms with words that rarely appear in promotional articles, terms that revolve around controversy and evidence. The Controversy Discovery Engine assists this expanded searching. Just type in the term as you would for Google and choose from one or both lists of synonym clusters to add to the term. The magic here is nothing more than asking for more detailed and analytic language in the search results. You are free to download the page to your own desktop to avoid any additional tracking of search results through its host site, to have it available any time, and to modify its lexicon of synonyms. A small code sketch of the idea follows the examples below.
Some examples:

  1. "print disability" + dispute
  2. "legally blind" + evidence
  3. "NVDA screen reader" + research
  4. "white cane" + opinion
  5. "Amazon Kindle" accessibility + controversy
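For readers who want to tinker, here is a minimal Python sketch of the same expansion idea: append words from a 'controversy' or 'evidence' cluster to a query and hand the result to an ordinary search engine. The synonym clusters below are my own stand-ins, not the engine's actual lexicon.

```python
from urllib.parse import quote_plus

# Guessed synonym clusters; the real Controversy Discovery Engine has its own lexicon.
CONTROVERSY = ["controversy", "dispute", "criticism", "debate", "opinion"]
EVIDENCE = ["evidence", "research", "study", "data"]

def expanded_queries(term, clusters=(CONTROVERSY, EVIDENCE)):
    """Yield (query, Google URL) pairs for the term combined with each cluster word."""
    for cluster in clusters:
        for word in cluster:
            query = f'"{term}" {word}'
            yield query, "https://www.google.com/search?q=" + quote_plus(query)

for query, url in expanded_queries("Amazon Kindle accessibility"):
    print(query, "->", url)
```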

    Feedback would be much appreciated if you find this deeper search useful.

    Adjustment themes: canes, orientation and mobility, accessibility advocacy, social media, voting, resilience, memories, …

    Coming in next post!


Grafting web accessibility onto computer science education

Note: this is a long post with webliography in the next article.
There is also a recorded tour of CS web sites as an MP3 download.

Understanding web accessibility through computational thinking

This post is written for distribution during the first proclaimed National Computer Science Education Week, December 7, 2009. My goal is to stimulate awareness within the CSE community of the importance of web and software accessibility to society at large and to the proper development of associated skills within CS curricula. Taking this further, I offer a call to action to renovate our own websites for purposes of (1) improved service, (2) learning and practice, and (3) dissemination of lessons learned to other academic entities, including professional organizations.

Recognizing that traditional, accredited CS curricula do not define a role for accessibility, I suggest actions that can be grafted into courses as exercises, readings, debates, and projects. To further legitimize accessibility and improve its uptake, many of these problems can be cast as computational thinking in the framework of drivers from society, technology, and science.

Definitions and Caveats

Caveat: I do not represent the blindness communities, standards groups, or any funding agency.
Also, I limit this accessibility context to the USA and visual impairment disability.

Here is my personal definition framework:

  • Definition: disability = inability to independently perform daily living tasks due to physical or mental causes

    Example: I cannot usually read print in books or news, nor text on a computer screen at size 14

    Example: I cannot usually follow a mouse cursor to a button or line of text to edit

  • Definition: Assistive Technology (AT) = hardware or software that overcomes some limits of a disability

    Example: A screen magnifier can track a mouse cursor then smooth and enlarge text in the cursor region

    Example: A screen reader can announce screen events and read text using synthetic speech

  • Definition: Accessibility = Quality of hardware and software to (1) enable assistive technology and also (2) support the AT user to the full extent of their skills without unnecessary expenditure of personal energy

    Example: A web page that enables focus through keyboard events enables a screen reader to assist a user to operate the page with ease, provided hands are working. Same is true for sighted users.

    Example: A screen magnifier enables reading text and screen objects, but at such a low rate that I cannot accomplish much usual work.

    Note: I am conflating accessibility with usability here, with usability usually referring beyond disabilities. Informally, to me, “accessibility” means my screen reader is fully operational, not in the way, and there are no reasons I cannot achieve the goal of page success as well as anybody.

  • Definition: Accommodation = explicit human decisions and actions to accomplish accessibility

    Example: Modifying a web page enhances comprehension for a screen reader user, see POSH computational thinking below

    Example: Adapting security settings on a PC to permit a job applicant with a screen reader on a pen drive to read instructions and complete tests and forms

    Example: A curb cut in a sidewalk enables wheelchairs to more easily cross streets. Also true for baby strollers, inattentive pedestrians, visually impaired people, luggage carts, skateboards, etc.

I base my analysis and recommendations on several domains of knowledge:

  • Learning and acquisition of skills as a recent Vision Loser, becoming "print disabled" and "legally blind", now at an intermediate skill level

  • Computer scientist, active for decades in formal methods and testing, highly related to “computational thinking” with broader professional experience in design methods and technology transfer.

  • Intermittent computer science and software engineering educator at undergraduate and master’s level programs with experience and opinions on accreditation, course contents, student projects, and associated research

  • Accelerated self-study and survival training from the community of persons with disabilities, the industry and professions serving them, and the means for activism based in social media like twitter, blogs, and podcasts

  • Lingering awareness of my own failings before my vision loss, including software without accessibility hooks, web pages lacking structural/semantic markup, and, worst of all, omission of accessibility considerations from most courses and projects. My personal glass house lies in slivers around me as I shout, "If only I knew then, when I was professionally active, what I know now, as a semi-retiree living with the consequences and continuing failures of my profession."

What is "computational thinking" and what does it have to do with accessibility?

This term was coined by Dr. Jeannette Wing in a 2006 article and best expressed in her Royal Society presentation and podcast conversations. For our purposes, CT asks for more precise description of the abstractions used in assistive technology, web design, mainstream browsers, etc. The gold standard of web accessibility for my personal kind of disability, shared with millions of Americans, is the bottom line of reading and interacting with websites as well as currently sighted persons do. To an amazing degree, audio and hearing replace pixels and seeing, provided designs support the cooperation of assistive technology at reasonable cost in effort expended. I'll illustrate some fledgling computational thinking in a later section and by touring CS and other websites, but, sorry, that won't be a very pleasant experience for either me the performer or the listeners.

CSE can benefit from the more rigorous application of CT to meet its societal obligations while opening up new areas of research in science and technology leading to more universal designs for everybody. To emphasize, however, this is not a venture requiring more research before vast improvements can be achieved, but rather a challenge to educators to take ownership and produce more aware computing professionals. …

Driving forces of society, technology, and science

Here’s a summary of trends and issues worthy of attention within CSE and suggested actions that might be grafted appropriately.

Driving forces from society

Computer science education has a knowledge gap regarding accessibility

As excellently argued in the course description "Accessibility First", web design in general, accessibility, and assistive technology are at best service-learning or research specialties falling under human-computer interaction or robotics. Where do CS students gain exposure to human differences, the ethics of producing and managing systems usable by everybody, and the challenges of exploring design spaces with universal intentions?

The extensive webliography below offers the best examples I could find, so please add others as comments. Note that I do not reference digital libraries because (1) the major ACM Portal is accessibility deficient itself and (2) I object to the practice of professional contributions being available only at a charge. The practice of professional society control over publications forces a gulf between academic researchers and a vibrant community of practitioners, including designers, tool builders, accessibility consultants and activists.

Action: Use the above definition framework to describe the characteristics of the following as ordinary or assistive: keyboards, tablets with stylus, onscreen keyboard, mouse, screens, fonts, gestures, etc. How do these interfaces serve (1) product developers and (2) product users? Where is the line between assistive and mainstream technology?

Action: See the proposed expansion of the National Computer Science Education Week proclamation in our conclusions. Debate the merits of both the "whereas" assumptions and the "therefore" call to action. Are these principles already adopted and practiced within CSE?

Disability is so prevalent that accessibility is a uniform product requirement.

Being disabled is common: an estimated 15% of the U.S. population has visual impairment serious enough to require adjustments from sites designed assuming full capabilities of acuity, contrast, and color. Eyesight changes are inevitable throughout life, even without underlying conditions such as macular degeneration or severe myopia. Visual abilities also vary with ambient conditions such as lighting, glare, and now the size and brightness of small screens on mobile devices. Considering other impairments, a broken arm, carpal tunnel injury, or muscle weakness gives a different appreciation for interaction with a mouse, keyboard, or touch screen. As is often said, we will all be disabled in some way if we live long enough. Understanding of human differences is essential to the production of good software, hardware, and documentation. Luckily, there are increasingly more specimens, like me, willing to expose and explain their differing abilities, and a vast library of demonstrations recorded in podcasts and videos.

Action: View YouTube videos such as the blind web designer using a screen reader to explain the importance of headings on web pages. Summarize the differences in how he operates from currently sighted web users. How expensive is the use of headings? See more later in our discussion of CT for headings.

Action: Visit or invite the professionals from your organization's disability services, learning center, or whatever it is called. These specialists can explain disabilities, assistive technology, educational adjustments, and legal requirements.

Action: Is accessibility for everybody, everywhere, all the time a reasonable requirement? What are the ethics and tradeoffs of a decision against accommodation? What are the responsibilities of those requiring accommodations?

The ‘curb cut’ principle suggests how accessibility is better for everyone

Curb cuts for wheelchairs also guide blind persons into street crossings and prevent accidents for baby strollers, bicyclists, skateboarders, and inattentive walkers. The “curb cuts” principle is that removing a barrier for persons with disabilities improves the situation for everybody. This hypothesis suggests erasing the line that labels some technologies as assistive and certain practices as accessibility to maximize the benefits for future users of all computer-enabled devices. This paradigm requires a new theory of design that recognizes accessibility flaws as unexplored areas of the design space, potential harbingers of complexity and quality loss, plus opportunities for innovation in architectures and interfaces. Additionally, web accessibility ennobles our profession and is just good for business.

Action: List physical barriers and adaptations in your vicinity, not only curb cuts, but signage, safety signals, and personal helpers. Identify how these accommodate people with canes, wheelchairs, service animals, etc. And also identify ways these are either helpful or hampering individuals without disabilities. Look at settings of computers and media used by instructors in classrooms. Maybe a scavenger hunt is a good way to collect empirical physical information and heighten awareness.

Action: Identify assistive technology and accessibility techniques that are also useful for reasons other than accessibility, e.g. a keyboard-enabled web page or browser tabs that support power users.

Persons with disabilities assert their civil rights to improve technology.

While most of us dislike lawsuits and lawyers, laws are continuously tested and updated to deal with conflicts, omissions, and harm. Often these are great educational opportunities on both the challenges of living with disabilities and the engineering modifications, sometimes minor, required for accommodations. Commercial websites like Amazon, iTunes, the Law School Admission Test, the Small Business Administration, and Target are forcefully reminded that customers are driven away by the inaccessibility of graphics, menus, forms, and shopping carts. Conversely, I recently had a quick and easy checkout from a Yahoo small business website, greatly raising my respect and the likelihood of a future return whenever I see that product vendor and website provider.

Devices such as controllers on communication systems, the Amazon Kindle, and new software like Google Wave and the Chrome browser often launch with only promises of accessibility, excluding users offensively and missing feedback opportunities from persons with disabilities. Over and over, accessibility exemplifies the proverbial software rule that the cost of fixing missing requirements rises the later they are addressed, whether the motivation is legal or business. While a lawsuit can amazingly accelerate accessibility, companies with vast resources like Microsoft, Oracle, Blackboard, and Google are now pitted in accessibility races with Yahoo, Apple, and others. The bar is rapidly being raised by activism and innovation.

For many, the social good of enabling equal access to computing is an attractor to a field renowned for nerds and greed. Social entrepreneurs offer an expansive sense of opening doors to not only education and entertainment but also employment, which now stands around 20% for disabled persons. Many innovative nonprofit organizations take advantage of copyright exemptions to build libraries and technology aids as alternatives to print and traditional reading.

The computing curb cuts principle can motivate professionals, services, and end users to achieve the potential beauty and magic of computing in everyday life, globally, and for everybody who will eventually make the transition into some form of sensory, motor, or mental deficiency. But, first, mainstream computing must open its knowledge and career paths to encompass the visionaries and advances now segregated. All too often persons with disabilities are more advanced, diversified, and skillful in ways that could benefit not-yet-disabled people.

Action: The ubiquitous bank ATM offers a well documented ten-year case study of how mediation led to a great improvement in independent living for visually impaired people. Take those earbuds out of the MP3 player and try them on a local ATM, asking for service help if needed or if the ATM is not voice enabled. Using a voice-enabled ATM also provides insight into the far more problematic area of electronic voting systems.

The Amazon Kindle lawsuit by blind advocates against universities considering, or rejecting, the device and its textbook market provides a good subject for debate.

Action: On the home front, pedagogical advances claimed for visual programming languages like Alice are not equally available to visually impaired students and teachers. First, is this a true assertion? How does this situation fit the definition of equal or equivalent access to educational opportunities? Should the platform and implementation be redone for accessibility? Note: I've personally seen a student rapidly learn OO concepts and have sat in on CS1 courses with Alice, but I am totally helpless with only a bright, silent blob on the screen after download. Yes, I've spoken to SIGCSE and Alice personnel and suggested accessibility options, but never received a response on what happens to the blind student who signs up for an Alice-based CS course. Please comment if you have relevant experience with accommodations and Alice or other direct manipulation techniques.

The Web has evolved a strong set of standards and community of supporters.

W3C-led efforts are now at WCAG 2.0 with an evolved suite of standards products, including documents, validators, and design tools. Standards go a long way toward enabling accessibility by both their prescriptions and rationales, often drawing on scientific principles such as color perception. But the essence of web standards is to define the contracts among browsers and related web technologies that enable designers to predict the appearance of and interaction with their designed sites and pages. The theme of WCAG 2.0 sums up as Perceivable, Operable, Understandable, and Robust. We all owe a debt to the Web Standards Mafia for their technical contributions, forceful advocacy to vendors, and extensive continuing education.

Web standards are sufficiently mature, socially necessary, and business worthy that open, grassroots-motivated curricula are being defined. CSE people who understand CT may well be able to contribute uniquely to this effort. In any case, questions about the relationship of traditional CS education to this independent curriculum movement must be addressed, considering the large workforce of web designers, including accessibility specialists. Furthermore, web design inherently requires close designer and client communication, making it difficult to offshore into different cultural settings.

Action: Use the #accessibility and #a11y hashtags on Twitter to track the latest community discussions, mostly presented in blogs and podcasts. Pick a problem, like data tables, to learn the accessibility issues from these experts. Find and create good and bad examples, but note you may need screen reader software for this. Can you characterize the alternatives and tradeoffs in CT terms?

Action: Create or try some web page features in several different browsers. Notice the differences in appearance and operation. Which sections of WCAG apply to noticeable differences or similarities?

Action: What is the career connection of computer science and web design? What are the demographics, salary, portability, and other qualities of web design versus traditional CS and SE jobs?

Transparency and dissemination of federal government data is drawing attention to accessibility

First, a remodeled federal website drew accolades and criticisms. New government data and transparency websites appeared to reinforce the Obama administration's promises; one showed up on my radar screen through its Twitter flow. All these web sources are now in my RSS feed reading regime. But the websites seem to be still behind on some aspects of accessibility, and under scrutiny by activists, including me. Personally, I'd be satisfied with a common form for requesting data and services, not the form elements themselves but well-evolved interaction patterns with feedback and validation. More importantly, the data sets and analyses are challenging for visually impaired people, suggesting new scientific research and novel technology to utilize alternative non-visual senses and brain power.

Additionally, innovation in assistive technology and accessibility is recognized at the National Center for Technology Innovation, with emphasis on portability and convergence with mainstream technology. Indeed, apparently, there are stimulus funds available in education and in communication systems.

Action: Visit the various USG cabinet department websites and then write down your main perception of their quality and ability to answer questions.

Action: Find examples of USG website forms users fill out for contacts, downloads of data sets, mailing lists, etc. How easy is filling out the forms? What mistakes do you make? How long does each take? Which forms are best and worst?

Action: Check out whether any stimulus funds are being spent on assistive technology. Or perhaps that information is on Department of Education sites as plans or solicitations.

Mainstream and assistive technologies are beginning to cross over.

BusinessWeek notes a number of examples:
Clearly mobile devices are driving this change. Embedding VoiceOver in Mac OS, then transferring it to products like the iPod Touch, has motivated a number of blind "screenless switchers". Google calls its version on Android "eyes-free". For those long stuck in the "blindness ghetto" of products costing $1000s with small-company support and marketing chains through disability support service purveyors, this is a big deal. Conversely, although limited by the terms of the Chafee amendment, members of Bookshare have enjoyed access to a rapidly growing library of texts, really XML documents, read in synthetic speech by now pocket-sized devices that cross Kindle and iPod capabilities. There's never been a better time to lose some vision if one is a technology adopter willing to spend down retirement funds to remain active and well informed. The aging baby boomer generation that drives USA cost concerns will be a vast market needing to keep up with the government flow of information and electronic documentation, not to mention younger generations.

But while this Vision Loser is happy with the technology trend, for those disabled around the world working with older or non-existent computing environments, these mainstream and free, open source trends make truly life-changing differences.

Action: What are the job qualifications for working in the areas of assistive technology and accessibility? Is this business area growing, and in what regions of the USA or the world?

Technology drivers

Social media opens the culture of disability and the assistive markets for all computing professionals to explore.

While the cultures of disability may operate separate systems of societies and websites, in the case of vision impairment the resources are right there for everybody to learn from, primarily through demos disseminated as podcasts by Blind Cool Tech, Accessible World, and vendors. Several annual conferences feature free exhibit halls visited by disability professionals, independent disabled people like me, and luminaries like Stevie Wonder. CSUN is the biggest and a good place to get vendor and product lists. Again, many products can be seen at local disability support services. Local computer societies and CS courses may find well-equipped people who can present, like my "Using Things That Talk". This is a vibrant world of marketing closely coupled with users, highly professional demos, and innovative developers, often disabled themselves. I personally treasure shaking hands with and thanking the young blind guys behind my Levelstar Icon and NVDA screen readers. Also, mailing lists are to various degrees helpful to the newly disabled, and rarely particular about age and gender. It's a great technology culture to be forced into.

Action: Whenever you’re in a large enough city, visit their local vision training centers. I think you’ll be welcome, and might leave as a volunteer.

Action: With well over a thousand podcasts, dozens of blogs, and a regular tweet stream, the entry points for learning are abundant. However, the terminology and styles of presenters and presentations vary widely. Consider an example, often used in computer science, like David Harel’s watch, the microwave oven, or elevator controller. How do the state diagrams manifest in speech interfaces? Can you reverse engineer device descriptions using computational thinking? How could this help disabled users or accessibility providers?

Text-to-speech (TTS) is a mature technology with commodity voices.

Screen reader users rely on software-implemented speech engines which use data files of word-to-sound mappings, i.e. voices. Built into Mac OS and widely available on Windows and Linux, this mature technology supports a marketplace of voices, available as open source or purchased with varying degrees of licensing at a cost of about $25. Comparable engines and voices are the main output channel of mobile assistive devices, like the Levelstar Icon I am typing on now. Web pages, books, dialogs, email: reading is all in our mind, through our ears, not our eyes. It is an amazing and not yet widely appreciated breakthrough from a lineage of speech pioneers dating back to 1939, through DECtalk and AT&T Natural Voices, to today's interactive voices.
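As a taste of how thin the layer over these engines can be, here is a minimal Python sketch using the third-party pyttsx3 wrapper, which drives the platform speech engine (SAPI5 on Windows, the built-in Mac synthesizer, eSpeak on Linux). Voice names and availability depend entirely on what is installed locally.

```python
import pyttsx3  # third-party wrapper around the platform's speech engine

engine = pyttsx3.init()

# List whatever voices the local engine exposes; names vary by platform and installed add-ons.
for voice in engine.getProperty("voices"):
    print(voice.id, "-", voice.name)

engine.setProperty("rate", 200)  # roughly words per minute
engine.say("Reading is all in our mind, through our ears, not our eyes.")
engine.runAndWait()
```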


Action: Wikipedia has a great chronology and description of synthetic speech. Track this with Moore’s law and the changes of technology over decades.

Action: Compare synthetic voices, e.g. using samples from vendors or the 'As Your World Changes' blog.

Processor power and storage enable more and more talking devices. Why not everything?

Alarm clocks, microwave ovens, thermostats, and many more everyday objects are speech enabled to some degree; see the demos on Blind Cool Tech and Accessible World. I carry my library of 1000+ books everywhere in a candy-bar-sized screenless device. But why stop before these devices are wirelessly connected into meaningful contextual networks? Thermostats could relay information about climate and weather trends, power company and power grid situations, and feedback on settings and recommended adjustments. Devices can carry their own manuals and training.

Action: Listen to podcasts on Blind Cool Tech and Accessible World about talking devices and how they are used by visually impaired people. Reverse engineer the devices into state machines and use cases, and write conversations between devices and users in "natural language", assuming ease of speech output.
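As one way into that reverse-engineering exercise, here is a minimal Python sketch of a hypothetical talking thermostat modeled as a tiny state machine; the states, buttons, and phrases are invented for illustration, not taken from any real device.

```python
# Hypothetical talking thermostat as a tiny state machine (states and phrases invented).
ANNOUNCEMENTS = {
    "idle": "Current temperature {temp} degrees. Target {target}.",
    "adjusting": "Target changed to {target} degrees.",
    "alert": "Warning: temperature {temp} is outside the comfort range.",
}

class TalkingThermostat:
    def __init__(self, temp=70, target=70):
        self.temp, self.target, self.state = temp, target, "idle"

    def speak(self):
        # A real device would hand this string to its TTS engine.
        print(ANNOUNCEMENTS[self.state].format(temp=self.temp, target=self.target))

    def press(self, button):
        """Transition on a button press: 'up', 'down', or 'status'."""
        if button in ("up", "down"):
            self.target += 1 if button == "up" else -1
            self.state = "adjusting"
        elif button == "status":
            self.state = "alert" if abs(self.temp - self.target) > 5 else "idle"
        self.speak()

thermostat = TalkingThermostat()
thermostat.press("up")      # "Target changed to 71 degrees."
thermostat.press("status")  # "Current temperature 70 degrees. Target 71."
```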

Action: Inventory some devices that might be redesigned for talking, even talkative. Electrical or chemical laboratory instruments, medical devices, home appliances, cars and other moving things, etc. But what would these devices speak? How do they avoid noise pollution? interference? annoyance?

Action: Computer science researchers are great at devising advanced solutions that provide service to relatively few disabled persons. For example, I have no use for GPS because if I'm somewhere I don't know, I'm in bigger trouble than needing coordinates. This would be different in a city with public transportation, maybe. How do we evaluate technology solutions with the user, not the technology purveyor, as the main beneficiary?

The pivotal technology for the visually impaired, the screen reader, is rapidly evolving through open source

A screen reader doesn't really read pixels but rather the interfaces and objects in the browser and desktop. GUI objects expose their behaviors and properties for the screen reader to read and operate via TTS. Listen to the demos of CS websites you may be familiar with. Unfortunately, the screen reader marketplace has been priced at over $1000 with steep SMA updates and limits on trials and distribution. Products are largely sold to rehab and disability services and passed on to users, with limited sales to individuals. This is a killer situation for older adults who find themselves needing assistance but without the social services mandated for veterans, students, and employees. Worse, product patents are being employed by lawyers and company owners (some non-USA) in competitive lawsuits.

However, the world has changed with the development over the past few years of NVDA (NonVisual Desktop Access), originating in Australia with grants from Mozilla, then Yahoo and Microsoft. A worldwide user community adapts NVDA for locales and TTS languages, with constant feedback to core developers. Gradually, through both modern languages (Python) and browser developer collaborations, NVDA is challenging the market. You can't beat free, portable, and easily installed if the product works well enough, as NVDA has for me since 2007. It's fun to watch and support an agile upstart, as the industry is constantly changing with new web technologies like ARIA. The main problem with NVDA is robustness amid the competing pools for memory resources and inevitable Windows restarts and unwanted updates.

Action: Download and install NVDA. Listen to demos to learn its use. You will probably want to upgrade the TTS voices from its distributed, also open source, eSpeak.

Action: Learn how to test web pages with NVDA, with tutorials available from WebAIM and Firefox. Define testing criteria (see standards) and processes. Note: this is a good area for new educational material, building on CS and SE testing theories and practices.

Action: Develop testing practices, tools, and theories for NVDA itself. Since screen readers are abstraction oriented, CT rigor could help.

Action: Modify NVDA to provide complexity and cost information. Is there a Magic Metric that NVDA could apply to determine, with, say, 80% agreement with visually impaired users, that a page was OK, a DoOver, or of questionable quality in some respect?

Structured text enables book and news reading on a variety of devices.

DAISY is a specification widely implemented to represent books, newspapers, magazines, manuals, etc. Although few documents fully exploit its structuring capabilities, in principle a hierarchy of levels with headings allows rapid navigation of large textual objects. For example, the Sunday NY Times has 20 sections (editorials, automobiles, obituaries, etc.) separated into articles. Reading involves arrowing to interesting sections, selecting articles, and listening in TTS until the end of an article or an impatient click to the next one. Books arrive as folders usually less than 1 MB in size. Reader devices and software manage bookmarks, possibly in recorded voice, and the last stopping point, caused by user action or sleep timer. In addition to Audible and national narrated reading services with DRM, the TTS reading regime offers a rich world: 60,000+ books contributed by volunteers and publishers to Bookshare and soon over 1M DAISY-formatted public books.
These are not web accessibility capabilities in the browser sense, but the devices do read HTML as text, support RSS reading of articles on blogs, and include browsers with certain limits, such as no Flash.
Over time, these devices contribute to improved speech synthesis for use everywhere, including replacement of human voice organs. Stephen Hawking, blogger heroine Glenda the "Left Thumb Blogger", who has cerebral palsy, and others use computers and mobile devices simply to communicate speech.
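To make that "hierarchy of levels with headings" concrete, here is a short Python sketch that prints the navigation outline of a DAISY 3 book from its NCX file using only the standard library. The namespace and element names follow my reading of the NCX specification, and the file name is hypothetical, so treat the details as an approximation.

```python
import xml.etree.ElementTree as ET

# Namespace as I understand the DAISY 3 / NCX specification.
NS = {"ncx": "http://www.daisy.org/z3986/2005/ncx/"}

def print_outline(ncx_path):
    """Print the nested navPoint labels of a DAISY book's navigation file."""
    root = ET.parse(ncx_path).getroot()

    def walk(nav_point, depth):
        label = nav_point.find("ncx:navLabel/ncx:text", NS)
        text = label.text.strip() if label is not None and label.text else "?"
        print("  " * depth + text)
        for child in nav_point.findall("ncx:navPoint", NS):
            walk(child, depth + 1)

    for nav_point in root.findall("ncx:navMap/ncx:navPoint", NS):
        walk(nav_point, 0)

# print_outline("sunday_times/navigation.ncx")  # hypothetical file name
```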

Action: Listen to podcast demos of devices like the Icon, BookSense, PlexTalk, and Victor Stream. What capabilities make reading possible, tolerable, or pleasant? Voice, speed, flexibility, cost, access, …?

Accessibility tools are available, corresponding to static analyzers and style checkers for code.

While not uniformly agreed upon, accurate, or helpful, standards groups provide online validators to "test" accessibility. For example, WAVE from WebAIM marks up a page with comments derived from web standards guidelines, like "problematic link", "unmatched brackets", JavaScript interactions (if JavaScript is disabled), header outline anomalies, missing graphic explanations, and small or invisible text. It's easy to use this checker: just fill in the URL. However, interpreting the results takes some skill and knowledge. Just as with a static analyzer, there are false hits, warnings where the real problem is elsewhere, and a tendency to drive developers into details that miss the main flaws. Passing with clean marks is also not sufficient, as a page may still be overly complex or incomprehensible.
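To see what such a checker looks like under the hood, here is a minimal Python sketch of a few WAVE-style checks using the requests and beautifulsoup4 packages; it is nowhere near a real validator, just the flavor of the rules.

```python
import requests
from bs4 import BeautifulSoup

def quick_checks(url):
    """Return a list of rough accessibility complaints for one page (illustrative only)."""
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    issues = []
    for img in soup.find_all("img"):
        if not img.get("alt"):
            issues.append(f"image missing alt text: {img.get('src', '?')}")
    for link in soup.find_all("a"):
        if link.get_text(strip=True).lower() in {"click here", "here", "more"}:
            issues.append(f"uninformative link text: {link.get_text(strip=True)!r}")
    if not soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"]):
        issues.append("no headings at all")
    return issues

for issue in quick_checks("https://example.com"):
    print(issue)
```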

Action: Below is a list of websites from my recorded tour. Copy each link into WAVE (not the Google one) and match the markup and messages to my complaints or other problems. Show how you would redesign the page, if necessary, using this feedback.

Action: Redesign the ACM digital library and portal in a shadow website to show how a modern use of structured HTML would help.

Action: Consider alternatives to PDF delivery formats. Would articles be more or less usable in DAISY?

Action: Design suites of use cases for alternative digital libraries of computer science content. Which library or search engine is most cost effective for maintenance and users?

Science drivers

Understanding of brain plasticity suggests new ways of managing disabilities

Brain science should explain the unexpected effectiveness and pleasure of reading without vision.

My personal story: although I was experimenting with TTS reading of web pages, I had little appreciation, probably induced by denial, of how I could ever read books or long articles in their entirety. Since it was only a few weeks from when I gave up on my Newsweek and my reading on archetypes until my retina specialist pronounced me beyond the acuity level of legal blindness, I only briefly flirted with magnifiers, the trade of low vision specialists. Rather, upon the advice of another legally blind professional I met through her book and podcast interviews, I immediately joined the wonderful nonprofit Bookshare. A few trials with some very good synthetic voices and clunky PC-based software book readers led me to the best handheld device at that time, the Bookport from APH, the American Printing House for the Blind. Within weeks, I was scouring Bookshare, then around 20,000 volumes, for my favorite authors and, wonders be, best sellers to download to my Bookport. At first, I abhorred the synthetic voice, but if that was all that stood between me and regular reading, I could grow to love old precious Paul. Going on 4 years, 2 GB of books, and a spare of the discontinued Bookport later, I still risk strangulation from earbuds at night with the Bookport beside me. Two book clubs broadened my reading into deeper unfamiliar nonfiction terrain, and the Levelstar Icon became my main retriever from Bookshare, now up to 60,000 volumes with many teenage series and nationally available school textbooks. I tell this story not only to encourage others losing vision, but also as a testimonial to the fact that I am totally and continually amazed and appreciative that my brain morphed so easily from visual reading of printed books to TTS renditions in older robotic-style voices. I really don't believe my brain knows the difference about plot, characters, and details, with the exception of difficult proper names and tables of data (more later). Neuroscientists and educators write books about the evolution of print but rarely delve into these questions of the effectiveness and pleasure of pure reading by TTS. The best research is Clifford Nass's "Wired for Speech", on how our brains react to gender, ethnicity, age, emotion, and other factors of synthetic speech. Such a fascinating topic!

Action: Listen to some of the samples of synthetic speech on my website, e.g. the blockbuster "The Lost Symbol" sample. Which voices affect your understanding of the content? How much do you absorb compared with reading the text sample? Extrapolate into reading the whole book using the voices you prefer, or can tolerate, and consider how you might appreciate the book's plot, characters, and scenery. Do you prefer male or female voices? Why?

Numerical literacy is an open challenge for visual disability.

I personally encountered this problem trying to discuss a retirement report built around asset allocations expressed in pie charts. Now, I understand charts well; I even programmed a chart tool. But I could find no way to replace the fluency of seeing a pie chart by reading the equivalent data in a table. This form of literacy, a form of numeracy, needs more work in the area of trans-literacy, using multiple forms of perception and mental reasoning. Yes, a pie chart can be rendered in tactile form, like Braille pin devices, but these are still expensive. Sound can convey some properties, but these depend on good hearing and a different part of the brain. Personally, I'd like to experiment with a widget operated by keyboard, primarily arrow keys, that reads numbers with different pitches, voices, volume, or other parameters. The escalating sound of a progress bar is available in my screen reader, for example. Is there a composite survey somewhere of alternative senses and brain training to replace reading charts? Could this be available in the mainstream technology market? How many disabilities or deficiencies of education and training might also be addressed in otherwise not disabled people?
Is there an app for that?
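Here is a minimal Python sketch of the mapping I have in mind: each pie slice becomes a spoken phrase plus a tone pitch scaled to its value. A real widget would hook the output to a TTS engine and a tone generator and bind it to arrow keys; everything here, including the sample allocation, is invented for illustration.

```python
def slices_to_speech_and_pitch(slices, low_hz=200, high_hz=1000):
    """Map (label, percent) slices to a spoken phrase and a tone frequency."""
    values = [value for _, value in slices]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1
    for label, value in slices:
        pitch = low_hz + (value - lo) / span * (high_hz - low_hz)
        yield f"{label}: {value} percent", round(pitch)

allocation = [("Stocks", 55), ("Bonds", 30), ("Cash", 10), ("Other", 5)]  # invented data
for phrase, hz in slices_to_speech_and_pitch(allocation):
    # A real widget would speak `phrase` via TTS and play a tone at `hz` on each arrow key press.
    print(phrase, "-> tone at", hz, "Hz")
```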

Action: Inventory graphical examples where data tables or other structures provide sufficient alternatives to charts. Prototype a keyboard-driven, speech-enabled widget for interaction with chart-like representations of data. Thank you for using me as a test subject.

Action: Moving from charts to general diagrams, how can blind students learn equivalent data structures like lists, graphs, state machines, etc.?

Web science needs accessibility criteria and vice versa.

The web is a vast system of artifacts of varying ages, HTML generations, human and software origin, importance, etc. Could current site and page accessibility evaluation scale to billions of pages in a sweep of accessibility improvement? Surveys currently profile how screen readers are used and the distribution of HTML element usage.

Do a web search in Bing, Yahoo, Google, or Dogpile, whatever, and you'll probably find a satisficing page, and a lot you wish not to visit or never visit again. Multiply that effort by, say, 10 for every page that's poorly designed or inaccessible to get a sense of the search experience of the visually impaired. Suppose also that the design flaws that count as accessibility failures also manifest as stumbles or confusion for newer or less experienced searchers. Now consider a serious-flaw failure rate of, say, 90% of all pages. Whew, there's a lot of barriers and waste in them there websites.

Experienced accessibility analysts, like those found on the WebAxe podcast and blog, can sort out good, bad, and just problematic features. Automated validation tools can point out many outright problems and hint at deeper design troubles.

Let's up the level and assume we could triage the whole web, yep, all billions of pages, matched against experimental results from real evaluators, say visually impaired web heads like me and those accessibility experts. This magic metric, MM, has three levels: OK, meaning no show stoppers, with 80% agreement from human evaluators; DO OVER, again with 80% human agreement on awfulness; and the remainder requiring reconciliation of human and metric. Suppose an independent crawler or search engine robot used this MM to tag sites and pages. Probably nothing would happen. But if…
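No such MM exists, but as a sketch of its shape, here is a Python toy that buckets a page into OK, DO OVER, or RECONCILE from a handful of counts; the features and thresholds are entirely invented and would need calibration against those human evaluators.

```python
def magic_metric(headings, images, images_with_alt, inputs, labeled_inputs):
    """Toy triage of one page; features and thresholds are invented, not calibrated."""
    score = 0
    score += 2 if headings >= 3 else 0
    score += 2 if images == 0 or images_with_alt / images > 0.9 else 0
    score += 2 if inputs == 0 or labeled_inputs / inputs > 0.9 else 0
    if score >= 5:
        return "OK"
    if score <= 1:
        return "DO OVER"
    return "RECONCILE"

print(magic_metric(headings=6, images=10, images_with_alt=10, inputs=3, labeled_inputs=3))  # OK
print(magic_metric(headings=0, images=8, images_with_alt=1, inputs=4, labeled_inputs=0))    # DO OVER
```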

Action: Declare a Clean Up the Web week, where the MM invokes real action to perform "do over" or "reconcile". Now we're paying attention to design factors that really matter and instigating serious design thought. All good; all we need is that MM.

Action: Which profession produces the most accessible pages, services, and sites? Computer scientists seem to be consistently remiss on headings, but are chemists or literary analysts any better? If the ACM site is as bad as I claim, are other professional societies more concerned about quality of service to their members? What are they doing the same or differently? How does the quality of accessibility affect the science of design as applied to web pages, sites, and applications?

Accessibility needs a Science of Design and Vice Versa

Accessibility concerns often lead into productive unexplored design regions.
Accessibility and usability are well defined if underused principles of product quality.  The ‘curb cuts’ principle suggests that a defect with respect to these qualities is in a poorly understood or unexplored area of a design. Often  a problem that presents only a little trouble for the expected “normal” user is a major hassle or show stopper for those with certain physical or cognitive deficiencies. However, those flaws compound and often invisibly reduce productivity for all users. Increasingly, these deficiencies arise from ambient environmental conditions such as glare, noise, and potential damage to users or devices.

Moreover, these problems may also indicate major flaws related to the integrity of a design and long term maintainability of the product. An example is the omission of Headings on an HTML page that makes it difficult to find content and navigation divisions with a screen reader. This flaw usually reveals an underlying lack of clarity about the purpose and structure of the website and page. Complexity and difficult usability often arise from missing and muddled use cases. Attitudes opposing checklist standards often lead to perpetuating poor practices such as the silly link label “click here”.

The ‘curb cuts’ principle leads toward a theory of design that  requires remedy of accessibility problems not as a kindness to users nor to meet a governmental regulation but rather to force exploration through difficult or novel parts of the design terrain. The paradigm of “universal design” demands attention to principles that should influence requirements, choice of technical frameworks, and attention to different aesthetics and other qualities.   For example, design principles may address  where responsibilities lie for speech information to a user, thus questioning whether alternative architectures should be considered. Applying this principle early and thoroughly potentially removes many warts of the product that now require clumsy and expensive accessibility grafts or do-overs.

Just as the design patterns movement grew from the architectural interests of Christopher Alexander, attention to universal design should help mature the fields for software and hardware. The “curb cuts” principle motivates designers to think beyond the trim looking curb to consider the functionality to really serve and attract ever more populations of end users.

The accessibility call for action, accommodation, translates into a different search space and broader criteria plus a more ethically or economically focused trade-off analysis. Now, design is rarely explicitly exploration-, criteria-, or tradeoff-focused, but the qualitative questions of inclusive design often jolt designers into broader consideration of design alternatives. Web standards such as WCAG 2.0 provide ways to prune alternatives as well as generate generally accepted good ones. It's that simple: stay within the rules, stray only if you understand the rationales for those rules, and temper trade-off analysis with empathy toward excluded users or cold acceptance of lost buyers or admirers. Well, that's not really so simple, but it expresses why web standards groups are so important and helpful: pruning, generating, and rationalizing is their contribution to web designers' professional effectiveness and peace of mind.

Action: Reconstruct a textbook design to identify assumptions about similarities and differences of users. Force the design to explore extremes such as missing or defective mouse and evaluate the robustness of the design.

Action: Find an example of a product that illustrates universal design. How were its design alternatives derived and evaluated?

Revving up our computational thinking on accessibility

POSH (Plain Old Semantic HTML) and headings

POSH focuses our attention on the common structural elements of HTML that add meaning to our content, with headings and lists as regular features. An enormous number of web pages are free of headings or careless about their use. The general rule is to outline the page in a logical manner: H1, H2, H3, …, H6, in hierarchical ordering.
Why is this so important for accessibility?

  1. Headings support page abstraction. Reaching a page, whether on a first or return
    visit, I, and many other screen reader users, take a "heading tour". Pressing the
    'h' key repeatedly to visit headings gives a rapid-fire reading of the parts of the
    page and an introduction to the terminology of the website and page content. Bingo!
    A good heading tour and my brain has a mental map and a quick plan for achieving my
    purpose for being there. No headings and, argh, I have to learn the same thing
    through links and weaker structures like lists. At worst I need to tab along
    the focus trail of HTML elements, usually in a top-bottom, left-right ordering.

  2. Page abstraction enables better-than-linear search if I know roughly what I
    want. For example, looking for colloquium talks on a CS website is likely to
    succeed by heading toward News and Events or the like. With likely a few dozen
    page parts, linear search is time and energy consuming, although it often leads
    to interesting distractions.

  3. Page abstraction encourages thinking about cohesion of parts, where to
    modularize, how to describe parts, and consistent naming. This becomes
    especially important for page maintainers, and eventually page readers, when
    new links are added. Just like software design, cohesion and coupling plus
    naming help control maintenance. An example of where this goes wrong is the
    “bureaucratic guano” on many government web pages, where every administrator
    and program manager needs to leave their own links but nobody has the page
    structure as their main goal.

  4. While it’s not easy to prove, SEO practitioners (search engine optimizers) plausibly claim that headings play a role in page rankings. This appeals to the good sense that words used in headings are more important and so worth higher weights for search accuracy. It might also mean such pages are better designed, but that is just the conventional wisdom of users with accessibility needs.

So, we have abstraction, search, design quality, and metrics all applied to the plain old semantic HTML heading construct.

Now, this rudimentary semantic use of headings is the current best practice, supplementing the older, largely deprecated access keys that keyboard users could exploit to reach standard page locations like the search box and navigation; headings refine and improve on what access keys offered. Going further, ARIA markup encourages so-called ‘landmarks’, which can also be toured and which help structure complex page patterns such as search results. The NVDA screen reader reports landmarks, as illustrated on Accessible Twitter and Bookshare. Sites without even headings appear quaint and deliberately unhelpful. A small sketch of the heading tour idea follows.
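To make the heading tour concrete, here is a minimal Python sketch (standard library only, not any screen reader’s actual code) that lists a page’s h1 through h6 outline plus any declared ARIA landmark roles, roughly what repeated presses of the ‘h’ key and the landmark quick key announce to me. The example URL in the comment is just a placeholder.

    from html.parser import HTMLParser
    from urllib.request import urlopen

    HEADING_TAGS = {"h1", "h2", "h3", "h4", "h5", "h6"}

    class PageTour(HTMLParser):
        """Collect the heading outline and ARIA landmark roles of a page."""
        def __init__(self):
            super().__init__()
            self.outline = []      # (level, text) pairs in document order
            self.landmarks = []    # values of role="..." attributes
            self._level = None     # heading level currently being read, if any

        def handle_starttag(self, tag, attrs):
            if tag in HEADING_TAGS:
                self._level = int(tag[1])
                self.outline.append([self._level, ""])
            for name, value in attrs:
                if name == "role" and value:
                    self.landmarks.append(value)

        def handle_data(self, data):
            if self._level is not None:
                self.outline[-1][1] += data

        def handle_endtag(self, tag):
            if tag in HEADING_TAGS:
                self._level = None

    def tour(url):
        parser = PageTour()
        parser.feed(urlopen(url).read().decode("utf-8", errors="ignore"))
        print("Heading tour:")
        for level, text in parser.outline:
            print("  " * (level - 1) + "h%d: %s" % (level, " ".join(text.split())))
        print("Landmarks:", ", ".join(parser.landmarks) or "none")

    # tour("https://example.com/")   # placeholder URL

A page with no headings produces an empty tour, which is exactly the ‘argh’ experience described in item 1 above.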

The Readable Conference Program Problem

I recently attended a 3.5-day conference with about 7 tracks per session. The program came as a PDF without markup, apparently derived from a Word document intended for printed use. Oh, yeah, it was a 10 MB download with decorations and all the conference info.

I was helpless to read this myself. Yes, I could use the screen reader, but I could not mentally keep track of all the times and tracks and speakers and topics. I couldn’t read down tracks or across sessions, nor mark talks to attend. Bummer. I needed a sighted reader and then still had to keep the program in mind while attending.

An HTML version of the preliminary program was decidedly more usable. Hey, this is what hypertext is all about! Links from talks to tracks and sessions and vice versa, the program subdivided onto pages by day or half-day, real HTML data tables with headers that a screen reader can interpret, albeit still slowly and painfully. That’s better, but it would be unpopular with sighted people who wanted a stapled or folded printout.

OK, we know this is highly structured data, so how about a database? With some SQL and HTML wrapping, this would permit generation of multiple formats, e.g. emphasizing tracks or sessions or topics (a sketch of the idea follows this paragraph). But this wouldn’t likely distill into a suitable printable document. Actually, MS Word is programmable, so the original route is still possible, just not often considered. Of course, it’s often more tedious to enter data into database forms, but isn’t that what student helpers are for? Ditto the HTML generation from the database.
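A hypothetical sketch of that database-plus-generation idea, in Python: keep the program as plain structured records and emit more than one HTML view of it. The talks, tracks, and times below are invented placeholders, not the actual conference program.

    from collections import defaultdict

    # Invented sample records; a real program would load these from a database.
    talks = [
        {"day": "Mon", "time": "09:00", "track": "Education", "title": "Teaching POSH"},
        {"day": "Mon", "time": "09:00", "track": "Tools", "title": "Screen reader testing"},
        {"day": "Tue", "time": "10:30", "track": "Education", "title": "ARIA in the classroom"},
    ]

    def by_track(talks):
        """One h2 per track, so a reader can move down a track."""
        groups = defaultdict(list)
        for t in talks:
            groups[t["track"]].append(t)
        parts = ["<h1>Program by track</h1>"]
        for track, items in sorted(groups.items()):
            parts.append("<h2>%s</h2>\n<ul>" % track)
            parts += ["<li>%s %s: %s</li>" % (t["day"], t["time"], t["title"]) for t in items]
            parts.append("</ul>")
        return "\n".join(parts)

    def by_session(talks):
        """One h2 per day and time slot, so a reader can move across a session."""
        groups = defaultdict(list)
        for t in talks:
            groups[(t["day"], t["time"])].append(t)
        parts = ["<h1>Program by session</h1>"]
        for (day, time), items in sorted(groups.items()):
            parts.append("<h2>%s %s</h2>\n<ul>" % (day, time))
            parts += ["<li>%s: %s</li>" % (t["track"], t["title"]) for t in items]
            parts.append("</ul>")
        return "\n".join(parts)

    print(by_track(talks))
    print(by_session(talks))

The same records could just as easily feed a printable view, which is the point: the structure is entered once and never lost.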

The best compromise might be to use appropriate heading styles in Word along with an available DAISY export, so the program, in XML, could be navigated in our book readers.

This example points to the persistent problem that PDF, which prints well and downloads intact, is a bugger when it loses its logical structure. Sighted readers see that structure; print-disabled people get just loads of text. This is especially ironic when the parts originally had semantic markup that was lost in translation to PDF, as occurs with NSF proposals.

So, here I’m trying to point out a number of abstraction problems, very mundane, but amenable to accommodation by abstracting to a database-style model or by fully exploiting markup and accessible formats in Word. Are there other approaches? Does characterizing this problem in terms of trade-offs among abstractions and loss of structural information motivate computer scientists to approach their conference responsibilities differently?

More generally, accessibility strongly suggests that HTML be the dominant document type on the web, with PDF, TXT, Word, etc. as supplements. Adobe and freelance consultants work very hard to explain how PDF can be made accessible, but that’s just not happening, nor will it rescue the probably millions of moldering PDFs already out there. Besides negligent accessibility, forcing a user out of the browser into a separate application costs allocated resources and brings inevitable security updates.

Design by Progressive Enhancement

‘Graceful degradation’ didn’t work for web design, e.g. when a browser has JavaScript turned off, an older browser is used, or a browser has a small screen. Web designers recast their process to focus on content first, then styles, and finally interactive scripting. There’s a lot more in the practitioner literature that might well be amenable to computational thinking, e.g. tools that support and ease the enhancement process as well as the reverse accommodation of browser limitations. Perhaps tests could even be generated to work in conjunction with the free screen reader, encouraging web developers to place themselves in the user’s context, especially one requiring accessibility; a sketch of that testing idea follows.
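As one hedged illustration of that testing idea: fetch a page exactly as a no-script, no-style user agent would and check that the core content layer stands on its own. The URL and the expected phrases are placeholders for whatever a site’s own test plan would name.

    from urllib.request import urlopen

    def content_layer_ok(url, required_phrases):
        """Fetch the raw HTML (no scripts run, no styles applied) and confirm
        the essential content is present in that base layer."""
        html = urlopen(url).read().decode("utf-8", errors="ignore")
        missing = [phrase for phrase in required_phrases if phrase not in html]
        return len(missing) == 0, missing

    # Placeholder target and phrases; substitute a real page and its key content.
    ok, missing = content_layer_ok("https://example.com/",
                                   ["Example Domain", "More information"])
    print("content layer complete" if ok else "missing without scripts: %s" % missing)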

So, here’s a challenge for those interested in the Science of Design, design patterns, and test methods, with many case studies on the web discussed in blogs and podcasts.

Touring CS Websites by Screen Reader — download MP3

Are you up for something different? Download the MP3 illustration of POSH on Computer Science websites (45 minutes, 20 MB). This is me talking about what I find at the following locations, pointing out good and bad accessibility features. You should get a feeling for life using a screen reader and how I stumble around websites. And, please, let me interject that we’re all learning to make websites better, including my own, past and present.

Note: I meant POSH = “Plain Old Semantic HTML” but sometimes said “plain old simple HTML”. Sorry about the ringing alarm. Experimental metadata: Windows XP, Firefox, NVDA RC 2009, ATT Mike and NeoSpeech Kate voices, PlexTalk Pocket recorder.

Websites Visited on the CSE Screen Reader Tour

  1. U. Texas Austin

    Firm accessibility statement;
    graphic description?;
    headings cover all links?;
    good to have RSS;
    pretty POSH

  2. U. Washington

    No headings, uses layout tables (deprecated);
    good use of ALT describing graphics;
    not POSH

  3. U. Arizona

    all headings at H1, huh?;
    non informative links ‘learn more’;
    not POSH

  4. CS at

    no headings;
    non informative graphics and links;
    unidentified calendar trap;
    definitely not POSH

  5. Computational Thinking Center at CMU

    no headings;
strange term ‘probes’;
    non informative links PPT, PDF;
    poor POSH

  6. CRA Computing Research Association


    no headings;
interminable unstructured list of links;
    not so POSH

  7. and DL portal

    irregular headings on main page;
    no headings on DL portal;
    noninformative links to volumes;
    hard to find category section;
poor POSH

  8. Computer Educators Oral History Project CHEOP

    straightforward headings;
    don’t need “looks good” if standard;
    good links;
    POSH enough

  9. NCWIT, National Center for Women & Information Technology

    doesn’t conform to accessibility statement;
    graphics ALT are not informative;
    link ‘more’ lacks context;
    headings irregular;
    do over for POSH

So, what to do with these POSH reports?

Clearly, some sites could use more work to become world-class role models for accessibility. At first glance, my reports, and those that would be compiled from validators like WebAIM WAVE, indicate that some HTML tweaking would yield improvements. Maybe, but most websites are under the control of IT or new media or other departments, or perhaps outsourced to vendors. Changes would then require negotiation. Another complication is that once a renovation starts, it is all too easy to use the change as an excuse for a much more extensive overhaul. Sometimes fixes might not be so easy, as the processes of progressive enhancement often indicate. This is classical maintenance process management, as in software engineering.

However, hey, why not use this as a design contest? Which student group can produce a mockup shadow website that is attractive and also meets the WCAG, validator, and even the SLGer tests?

Just saying, here’s a great challenge for CSE to (1) learn more about accessibility and web standards, (2) make websites role models for other institutions, and (3) improve service for prospective students, parents, and benefactors.

Conclusion: A Call to Action

To the proclamation, let us informally add

  • Whereas society, including the CS field itself, requires that all information and computer-based technology be available to all persons with disabilities,

  • Whereas computer science is the academic field closest to the needs and opportunities of universal accessibility,

  • Whereas disabled individuals are particularly under-represented in computing fields, out of proportion to the importance of disability in the economic and social well-being of the nation,

  • Therefore,
  • computer science educators will adapt their curricula to produce students with a professional awareness of the range of human abilities and of the resources for responding to the needs of persons with disabilities,

  • computer science education will be open and welcoming to all persons with disabilities, both (1) helping each person reach their employment potential and their opportunity to contribute to society and (2) informing educators and other students about their abilities, needs, and domain knowledge.

See the next post for a webliography.

Comments, Corrections, Complaints?

Please add your comments below and I’ll moderate ASAP.
Yes, I know there are lots of typos, but I’m tired of listening to myself; I will proof-listen again later.
For longer comments, join the Twitter discussion of #accessibility by following me as slger123.

Thanks for listening.

Story: A Screen Reader Salvages a Legacy System

This post tells the story of how the NVDA screen reader helped a person with vision loss solve a puzzle from his former workplace. Way to go, Grandpa Dave, and thanks for permission to reprint this from the NVDA discussion list.

Grandpa Dave’s Story

From: Dave Mack
To: nvda

Date: Oct 29

Subj: [nvda] Just sharing a feel good experience with NVDA
Hi, again, folks, Grandpa Dave in California, here –
I have hesitated sharing a recent experience I had using NVDA because I know this list is primarily for purposes of reporting bugs and fixes using NVDA. However, since this is the first community of blind and visually-impaired users I have joined since losing my ability to read the screen visually, I have decided to go ahead and share this feel-good experience where my vision loss has turned out to be an asset for a group of sighted folks. A while ago, a list member shared their experience helping a sighted friend whose monitor had gone blank by fixing the problem using NVDA on a pen drive so I decided to go ahead and share this experience as well – though not involving a pen drive but most definitely involving my NVDA screen reader.

Well, I just had a great experience using NVDA to help some sighted folks where I used to work and where I retired from ten years ago. I got a phone call from the current president of the local Federal labor union I belonged to and she explained that the new union treasurer was having a problem updating their large membership database with changes in the union’s payroll deductions that they needed to forward to the agency’s central payroll for processing. She said they had been working off-and-on for almost three weeks and no one could resolve the problem even though they were following the payroll change instructions I had left on the computer back in the days I had written their database as an amateur programmer. I was shocked to hear they were still using my membership database program as I had written it almost three decades ago! I told her I didn’t remember much about the dBase programming language but I asked her to email me the original instructions I had left on the computer and a copy of the input commands they were keying into the computer. I told her I was now visually impaired, but was learning to use the NVDA screen reader and would do my best to help. She said even several of the Agency’s programmers were stumped, but they did not know the dBase programming language.

A half hour later I received two email attachments, one containing my thirty-year-old instructions and another containing the commands they were manually keying into their old pre-Windows computer, still being used by the union’s treasurer once a month for payroll deduction purposes. Well, as soon as I brought up the two documents and listened to a comparison using NVDA, I heard a difference between what they were entering and what my instructions had been. They were leaving out some “dots,” or periods, which should have been included in their input strings into the computer. I called the union’s current president back within minutes of receiving the email. Everyone was shocked and said they could not see the dots or periods. I told them to remember they were probably still using a thirty-year-old low-resolution computer monitor and old dot-matrix printer, which were making the dots or periods appear to be part of the letters they were situated between.

Later in the day I got a call back from the Local President saying I had definitely identified the problem, thanking me profusely, and telling me she was letting everyone know I had found the cause of the problem by listening to errors none of the sighted folks had been able to see. And, yes, they were going to upgrade their computer system now after all these many years. (laughing) I told her to remember this experience the next time anyone makes a wisecrack about folks with so-called impairments. She said it was a good lesson for all. Then she admitted that the reason they had not contacted me sooner was that they had heard through the grapevine that I was now legally blind and everyone assumed I would not be able to be of assistance. What a mistake and waste of time that ignorant assumption was, she confessed.

Well, that’s my feel-good story, but, then, it’s probably old hat for many of you. I just wanted to share it as it was my first experience teaching a little lesson to sighted people in my own small way, with the help of NVDA. –

Grandpa Dave in California

Moral of the Story: Screen Readers Augment Our Senses in Many Ways, Plus an Invitation to Comment

Do you have a story where a screen reader or similar audio technology solved problems where normal use of senses failed? Please post a comment.

And isn’t it great that we older folks have such a productive and usable way of overcoming our vision losses? Thanks, NVDA project developers, sponsors, and testers.

The Pleasures of Audio Reading

This post expands my response to an interesting Reading in the Dark survey. Sighted readers will learn from the survey how established services provide reading materials to be used with assistive technology. Vision Losers may find new tools and encouragement to maintain and expand their reading lives.

Survey Requesting feedback: thoughts on audio formats and personal reading styles?

Kestrell says:

… hoping to write an article on audio books and multiple literacies but, as far as I can find, there are no available sources discussing the topic of audio formats and literacy, let alone how such literacy may reflect a wide spectrum of reading preferences and personal styles.

Thus, I am hoping some of my friends who read audio format books will be willing to leave some comments here about their own reading of audio format books/podcasts. Feel free to post this in other places.

Some general questions:
Do you read audio format books?
Do you prefer special libraries or do you read more free or commercially-available audiobooks and podcasts?
What is your favorite device or devices for reading?
Do elements such as DRM and other security measures which dictate what device you can read on influence your choices?
Do you agree with David Rose–one of the few people who has written academic writings about audio formats and reading–that reading through listening is slower than reading visually?
How many audiobooks do you read in a week (this can include podcasts, etc.)?
Do you ever get the feeling from others that audiobooks and audio formats are still considered to be not quote real unquote books, or that reading audiobooks requires less literacy skills (in other words, do you feel there is a cultural prejudice toward reading audiobooks)?
Anything else you want to say about reading through listening?

This Vision Loser’s Response

Audio formats and services

I read almost exclusively using TTS on mobile readers from DAISY format books and newspapers. I find synthetic speech more flexible and faster than narrated content. For me, human narrators are more distracting than listening “through” the voice into the author’s words. I also liberally bookmark points I can re-read by sentence, paragraph, or page.

Bookshare is my primary source of books and newspapers, downloaded onto the Levelstar Icon PDA. I usually transfer books to the APH BookPort and PlexTalk Pocket for reading in bed and on the go, respectively. My news streams are expanded with dozens of RSS feeds of blogs, articles, and podcasts from news outlets, magazines, organizations, and individuals. Recently, Twitter has supplied a steady stream of links to worthy and interesting articles, followed on either the Icon or a browser in Accessible Twitter.

I never seem to follow through with NLS or Audible or other services with DRM and setups. I find the Bookshare DRM just right and respect it fully but could not imagine paying for an electronic book I could not pass on to others. I’m about to try Overdrive at my local library. I’ve been lax about signing up for NLS now that Icon provides download. No excuses, I should diversify my services.

I try to repay authors of shared scanned books with referrals to book clubs and friends; e.g., I’ve got several people now hooked on Winspear’s “Maisie Dobbs” series.

Reading quality and quantity

I belong to two book clubs that meet monthly as well as take lifelong learning classes at the community college. Book club members know that my ready book supply is limited and take this into consideration when selecting books. My compact with myself is that I buy selected books not on Bookshare and scan and submit them. I hope to catch up on submitting the already scanned books soon. Conversely, I can often preview a book before selection and make recommendations on topics that interest book club members, e.g. Jill Bolte Taylor’s “My Stroke of Insight”. I often annoy an avid reader friend by finishing a book while she is #40 on the local library waiting list. This happens with NYTimes best sellers and Diane Rehm show reader reviews. No, I don’t get askance looks from other readers, but rather the normal responses to an aging female geek.

At any one time, I usually have a dozen books “open” on the Bookport and PlexTalk as I switch among club and course selections, fiction favorites, and heavy nonfiction. Still, I usually finish 2 or 3 books a week, reading at night, with another 120 RSS feeds bringing in dozens of articles daily. I believe my reading productivity is higher than before vision loss, due to expedient technology delivery of content and my natural habits of skimming and reading nonlinearly. Indeed, reading by listening forces focus and concentration in a good sense and, even better, can be performed in just about any physical setting, posture, or other ambient condition.
Overall, I am exquisitely satisfied with my reading-by-listening mode. I have more content, better affordable devices, and a breadth of stimulating interests to forge a suitable reading life.

Reading wishes and wants

I do have several frustrations. (1) Books with tables of data lose me in a jumble of numbers unless the text describes the data profile. (2) While I have great access through Bookshare and NFB NewsLine to national newspapers and magazines, my state and local papers use content management systems difficult to read either online or by RSS feed. (3) Google Book Search refuses to put my research on an equal footing with others’ by displaying only images of pages.

For demographics, I’m 66 years old, lost my last sliver of reading vision three years ago from myopic degeneration, and struggled only a few months before settling into Bookshare. As a technologist first exposed to DECTalk in the 1980s, I appreciate TTS as a fantastically under-rated technology. However, others of my generation often respond with what I’ve dubbed “synthetic voice shock,” which scares them away from my reading devices and sources. I’d like to see more gentle introductions from AT vendors and the few rehab services available to retired vision losers. Finally, it would be great to totally obliterate the line between assistive and mainstream technology, to expand the market and also enable sighted people to read as well as some of us do.

References and Notes on Audio Reading

  1. Relevant previous posts from ‘As Your World Changes’

  2. Audio reading technology
    • LevelStar Icon Mobile Manager and Docking Station is my day-long companion for mail, RSS, twitter, and news. The link to Bookshare Newsstand and book collection sold me on the device. Bookshare can be searched by title, author, or recent additions, and I even hit my 100 limit last month. Newspapers download rapidly and are easy to read — get them before the industry collapses. The book shelf manager and reader are adequate but I prefer to upload in batches to the PC then download to Bookport. The Icon is my main RSS client for over 100 feeds of news, blogs, and podcasts.
    • Sadly, the American Printing House for the Blind is no longer able to maintain or distribute the Bookport due to manufacturing problems. However, some units are still around at blindness used equipment sites. The voice is snappy and it’s easy to browse through pages and leave simple bookmarks. Here is where I have probably dozens of DAISY files started, like a huge pile of books opened and waiting for my return. My biggest problem with this little black box is that my pet dog snags the ear buds as his toy. No other reader comes close to the comfort and joy of the Bookport, which awaits a successor at APH.
    • Demo of PlexTalk Pocket provides a TTS reader in a very small and comfortable package. However, this new product breaks on some books and is awkward managing files. The recording capabilities are awesome, providing great recording directly from a computer and voice memos. With a large SD card, this is also a good accessible MP3 player for podcasts.
  3. Article supporting Writers’ Guild in Kindle dispute illustrates the issues of copyright and author compensation. I personally would favor a micro payment system rather than my personal referral activism. However, in a society where a visually impaired person can be denied health insurance, where 70% unemployment is common, where web site accessibility is routinely ignored, it’s wonderful that readers have opportunities for both pleasure and keeping up with fellow book worshipers.
  4. Setting up podcast, blog, and news feeds is sometimes tricky and tedious. Here is my OPML feed list for importing into other RSS readers or editing in Notepad.

  5. Here’s another technology question. Could the DAISY standard format, well supported in our assistive reading devices, become a format suitable for distributing the promised data from
    Here is an interview with DAISY founder George Kerscher on XML progress.

  6. Another physiological question: what’s going on in my brain as I switch primarily to audio mode? Are there exercises that could make that switchover more comfortable and faster than just picking up devices and training oneself? I’m delving into blogs on ‘brain plasticity’.
  7. (WARNING: PDF) Listening to the Literacy Events of a Blind Reader, an essay by Mark Willis, asks whether audio reading can cope with the critical thinking required by a complex and sometimes self-contradictory doctrine like Thomas Kuhn’s “Scientific Revolutions”. That would be a great experiment in psychology, or on oneself. Let’s also not forget the resources of Book Club Reading Lists to help determine what we missed in a reading or may have gained through audio mental processing.

Audio reading of this blog post

The ‘Talking ATM’ Is My Invisible Dream Machine.

A twitter message alerted me to a milestone I surely didn’t care about a decade ago, but really appreciate now. This post explains how easy it is to use a Talking ATM. People with vision impairment might want to try out this hard-won disability service if not already users. Sighted people can gain insight and direct experience with the convenience of talking interfaces. But, hey, why shouldn’t every device talk like this?

The Milestone: 10 years of the Talking ATM

The history is well told in commemorative articles published in 2003. References below.
Pressure from blind individuals and advocacy organizations circa 2000, with the help of structured negotiators (lawyers), led banks to design and roll out Automated Teller Machines equipped with speech. Recorded audio wav files were replaced by synthetic voices that read instructions and lead the customer through a menu of transactions.

First, I’ll relate my experience and then extrapolate to broader technology and social issues.

My Talking ATM Story

As my vision slid away in 2006, I could no longer translate the wobbly lines and button labels on my ATM screen to comfortably perform routine cash withdrawals. Indeed, on one fateful Sunday afternoon I inserted my card, then noticed an unfamiliar pattern on the screen. Calling in my teenage driver, we noticed several handwritten notes indicating lost cards in the past hour. I had just enough cash in hand to make it through a Monday trip out of town, and immediately called the bank upon return Tuesday. A series of frustrating interactions ensued, like my ATM card could only be replaced by my coming in to enter a new PIN. But how was I to get to the office without a driver or cab fare when I was out of cash?

This seemed like a good time to familiarize myself with the audio ATM functions, to lessen the risk of having another card gobbled by a temporarily malfunctioning station. With lingering bad feelings about the branch of the Sunday fiasco, I recalled a better experience at a different office after my six-month saga reversing a mortgage over-payment. Lesson learned: never put an extra 0 in a $ box, and always listen to or look carefully at verification totals.

I strolled into the quiet office and asked customer service to explain the audio teller operations. The pleasant service person whipped out a big headset and we headed out to the ATM station. Oddly, most stations are located in office alcoves or on external walls. This one was outside the drive-up window, to be shared by pedestrian and automotive customers.
OK, waiting for traffic to clear, we went through a good intro. I wasn’t as familiar with audio interfaces at that point in my Vision Loser life, but I eventually worked up the courage over the next few weeks to tackle the ATM myself with my own ear buds.

Well, 3 years later, I’m a pro and can get my fast cash in under a minute, unless my ear buds get tangled or I drop my cane. The first problem is figuring out how to get in line, like standing behind a truck’s exhaust or walking out in front of a monster SUV. Usually I hang back, looking into the often dry bed of Granite Creek, until the line is empty. The next step is to stand my white cane in a corner of the ATM column, feel around for the audio jack hidden in a ridged region, wait for the voice to indicate the station is live, shove in my card, and I’m ready to roll. The voice, probably Eloquence, usually drones into a “Please listen carefully as the instructions have changed…”. Shut up, this will only take a minute and I don’t need to change volume or speed. Enter, type PIN, retype PIN if, as commonly happens, I hit a wrong key, and on to the Main Menu (thinking of ACB Radio’s Technology jingle). Six presses down to Fast Cash, on by 20, …, 100, …, confirm and click, chug comes cash, receipt, and release of card. Gather up receipt, card, cane, and, important, remove the ear buds, and I’m on my way.

Occasionally things go wrong. Recently, my receipt didn’t appear, so a customer service rep and I did a balance request, and out spat two receipts, both mine. Kind of nerve-wracking, as somebody else could have intervened and learned of my great wealth. The customer service rep vowed to call in maintenance on the ATM, but I bet a few more receipts got wadded up that afternoon. Electro-mechanical failures often foil sophisticated software.

Another time, I finished my Fast Cash and waited for the card release, only to be given a “have we got a good deal for you” long-winded offer of a credit card. I wasn’t sure how to cancel out and still get my ATM card back. Since I lecture family on the evils of the credit card, I was fuming at a double punishment. Complaining to the customer service rep inside, I learned sighted people were also not thrilled at this extra imposed step.

Now, to reveal the identity of the ATM, it’s Chase Bank, formerly Bank One, on Gurley Street near the historic Whisky Row of downtown Prescott AZ.
Although I haven’t performed any complex ATM interactions, it’s fair to say I’m a satisfied user and would not hesitate to recommend this to anyone with good hearing who is unafraid to perform transactions with engines, radios, and cell conversations roaring all around. An indoor ATM would be a good step someday but, hey, this is a conservative town, not particularly pedestrian friendly. Mainly I appreciate that I can get my cash as part of a routine just like other people, and I don’t even use up extra gasoline waiting in line.

Broader Issues of Talking Transactions

Does the ATM voice induce Synthetic Voice Shock?

I coined the term in Synthetic Voice Shock Reverberates Across the Divides to explain responses I heard to the voices offered in assistive technologies for overcoming vision loss. Personally, I hated Eloquence when I first heard it demonstrated, but I rapidly grew to love my Precise Paul and friends as I realized that (1) the voices really were understandable and (2) I didn’t have any choice if I wanted to keep reading. I now wonder how people like me, slowly losing vision while off the rehab grid, learn about Talking ATMs and related services. It hurts to think people give up that one step of independence because they don’t know whom to ask or even whether such services exist. And supposing someone does step up to an ATM ready to listen, are they tuned in to hearing synthetic speech sufficiently to make an informed choice about whether the Talking Teller is an appropriate service for them? Did the Disability Rights movement fight through a decade only to have a generation of drop-outs among oldsters coping with difficulty adjusting to vision loss, a panoply of technology, and no-longer-young nerves?

Are Audio E-voting and Talking ATMs Close Cousins?

I have described my experiences voting without viewing in 2008. The voting device has a keypad like the one offered by the ATM I use, while the voice is a combination of human-narrated candidate and race announcements interspersed with synthetic speech instructions and navigation. I found this mode of voting satisfying, compared with having someone read the ballot to me and mark it for me. However, even my well-attuned ears and fingers seemed to get in trouble with speech speedup and slowdown, which I blame on poor interaction design. Note that many ATM and voting systems have origins in the NCR and Diebold product lines, so usability and accessibility research lessons should carry over.

Why aren’t all check-out services as easy as banking?

I buy something at a store and then have a hassle at check-out finding a box on a screen or buttons I cannot see for typing in a debit card PIN. I’ve never understood why I can give a credit card number over the phone without signing but must sign if I swipe the card at checkout. And giving a PIN to a family member or stranger isn’t good practice. Sometimes check-out can get really nasty, as when a checker wouldn’t let me through because my debit card swiper was only age 20 – it’s my debit card, my groceries, my wine, and I’ll show you a social security age ID card. Geez, now we’re nervous every time we check out at Safeway since Aunt Susan has a short fuse after a tiring shopping session. If only the point-of-sale thing talked and had tactile forms of PIN entry. I ask Safeway when accessible check-out will be possible and let them know the store has a visually impaired regular shopper.

Is audio interaction a literacy issue?

We are actually on track to a world where everything talks: microwave ovens, cards, color tellers, security systems, thermostats, etc. Text-to-speech is a commodity add-on feature for the onboard processors in digital devices; a tiny sketch below shows just how little code a talking feature now requires. Indeed, we can hope this feature slips out of the aura of assistive technology into the mainstream, enlarging the range of products and capabilities available to everybody. Why shouldn’t manuals be built into the device, especially since the device is, soon after purchase, separated forever from its printed material? Why shouldn’t diagnostics be integrated with speech rather than provided on bitty screens hard for everybody to read? How about making screens the add-on feature, with audio as the main output channel?
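As a small, hedged illustration of how cheap speech output has become: with the third-party pyttsx3 library installed (an assumption for this sketch, not a requirement of any device mentioned here), a few lines of Python give any little utility a voice, the same way an appliance’s firmware could speak its own manual or diagnostics.

    import pyttsx3  # assumes: pip install pyttsx3

    def speak(text, rate=170):
        engine = pyttsx3.init()            # picks the platform's TTS engine
        engine.setProperty("rate", rate)   # words per minute, adjustable like a book reader
        engine.say(text)
        engine.runAndWait()

    # Imagine a thermostat or microwave exposing its built-in "manual" this way:
    speak("Filter status: clean. Press and hold start for two seconds to set the clock.")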

Let’s generalize here and suggest the need for a simple training module to help people with recent vision loss get accustomed to working keypads accompanied by synthetic speech. Who could offer such training? I asked around at the CSUN exhibits and haven’t yet found an answer. There are multiple stages here, like producing a book and then distributing to end users via libraries or rehab services. My experience is that social services are hard enough to find and often more available to people who have already suspended independent activities.

The outreach problem is real. Finally, I’d like to express my appreciation to the activists, educators, and lawyers who convinced banking organizations, and continue to work on retailers, to make my “money moments” conventional and stress-free. The “talking ATM” shows what is possible not only for business but also for the broader opportunities sketched out above. Let all devices talk, I wish.

References on Talking ATMs

  1. Background and excellent overview compiled by Disability Civil Rights Attorney Lainey Feingold

  2. Blind Cool Tech demos of talking devices

  3. Talking ATM on wikipedia

  4. Swedish choice of Acapella voices for ATMs for more modern sounding speech. Demos available on website.

  5. Chase bank and Access Technologies ATM collaboration

  6. (PDF) 2003 case study of Talking ATM upgrades
    . Bundled features with speech included better encryption and streamlined statement viewing.

  7. The electronic ‘curb cuts’ effect
    by Steve Jacobs

  8. Portfolio of talking information
    based on ATT technology

  9. ‘What to do when you meet a sighted person’ (parody)

Accessible Voting Worked for Me, I Think

It was a fine warm fall day for voting with an overhang of smoke from controlled burns in nearby forests.

After an earlier trial demo and a mixed experience in the September primary, I felt geared up for the mechanics of voting independently in this penultimate election of my lifetime. Ending a year of political junkiness and some serious conversations with “Jack the Dog Walker” on state ballot initiatives, I knew my choices.

Then I spoke those words that so shake up the poll workers at the Yavapai County early voting office — “I need audio voting”. With white cane for identity, I waited patiently while the exceptional procedures sprang into action. The poll worker handed me headphones, a number keypad, and a chair, returned my ID, and inserted the card to rev up the Premier Election Solutions workstation. Ominously, the audio did not work. Reset. Whoops, audio but no keypad response. Move over one workstation and I was finally in business, with instructions coming through the headphones and my brain fighting to cancel out the surrounding noise of the other voters in the office lobby alcove.

I was truly awestruck at the announcement of the office of Presidential Electors, forgetting momentarily the key to press to actually cast this important vote. Then I got into the rhythm – 6 for next, 5 to vote, 4 for back. This ballot’s interaction was easier than the primary’s, which required more confirmation and interaction to move among races. Each race and its contestants or YES/NO answers were clearly announced. However, a 7 to cancel a vote also slowed the voice, in contrast to the disconcerting speech speedup I experienced in September. This round I understood the sample ballot and could predict how far to go. After reading the ballot for confirmation, a 9 key pressed, the clatter of the printer, and I was done. I thanked the poll worker for competently handling this exceptional Vision Loser.

Whether my vote is actually counted accurately is a whole different matter, something the U.S. must fix if it cares for democracy as much as for marketplace ideology. Exhilarated by my independent action, I trekked on downtown for lunch near the famous Prescott Territorial Court House. Now, about those accessible street crossing signals — well, “adopt an intersection” is next on the agenda of this Vision Loser Voter.

Uh, oh, just when I thought I was safe from campaigning comes a warning about a Monday night scenic opportunity using the Court House Plaza as a prop. Sigh…

Previous Posts:

Voting Without Viewing? Yes, but It’s So Slow!

Taking advantage of accessible voting

I decided that since the Help America Vote Act had encumbered quite a few million dollars for fancy electronic equipment with accessible extensions, I would take my chances and vote as independently as possible this round. Here’s the story of early voting in an Arizona primary. Vision Losers might use this experience to evaluate their own voting options. Other citizens and technologists will learn how electronic voting works for one tech-savvy Vision Loser.

Against a background of the sorry state of American voting processes

First, let me say that, as an informed computer scientist, I do not for one nanosecond believe the odds are very high that my voting precinct actually got a correct tally of votes, including mine. I voted on a setup from the infamous Diebold, now renamed Premier Election Solutions. There’s just no way any independent assurance organization can reasonably test a black-box version of software and hardware, let alone all the combinations of diverse local ballot designs, multiple configurations of the setup, and inevitable versions of evolving software. And that’s not even worrying about human error by voting board personnel, malicious people, or silly policies like Ohio’s sleep-over procedures. Business ideology has trumped common-sense democracy for Americans, unlike Australia and other countries that adopt an open approach.

Here is how I voted in September 2008

A preview and trial at my local voting board

Nevertheless, I wanted my independence and to force myself through the best possible preparation. A few months ago, I paid a visit to the Yavapai County Recorder’s Office for a personal trial on a mock ballot so I would be familiar with the equipment. I was reasonably impressed with the audio system, very enthusiastic about the personnel, who welcomed the opportunity to try out their audio setup, and comfortable about working the equipment rather than asking someone to read and mark my ballot. I knew the actual voting would be slow and that I needed to do my homework on candidates and races so I could concentrate on the voting act itself.

Getting from sample to real ballot

I was pleased to find a nice little primary coming up in September, with early voting several weeks ahead. One primary race is especially important in Arizona District 1, to replace Rep. Rick Renzi, who was indicted on 35 counts of fraud and other bad stuff. With a senator as the presumptive Presidential candidate and a 40% voting record, the poor representation of this region for months especially annoys me, as economic and social policies have consequences I had not foreseen while grappling with my own rehabilitation and my family’s future. Both major parties had a good slate of 4 or 5 candidates with experience relevant to a highly diverse region of Indian reservations, small cities, and lots of open space.

I made my choice of party and candidate for Congress and began to look for the other races of interest. There were few contests, so I assumed the ballot would be a piece of cake. Actually, I had some trouble figuring out the full set of races. I used VoteSmart, the AZ Clean Elections site, the county listing of candidates, Arizona Republic and Daily Courier candidate blurbs, even Wikipedia. A sample ballot arrived just before my trip to the polling place, but my reader and I were confused by a long list of write-in lines.

The nitty-gritty mechanics of voting

So, as prepared as I could be, I entered the county office lobby and asked to vote using the audio system. I think I was the first to request this, as a flurry of calls upstairs quickly produced an access card for a screen protected by side blinders, plus the headset and keypad I had used in my previous experiment. Oh, and most important was a chair.

To summarize the audio voting process, you press the appropriate numbered buttons to advance through races, making and confirming choices while hearing the race titles, constraints, and candidate names through headphones. There is nothing visual happening. I listened to the instructions and tried to adjust the volume to suit both the synthetic voice announcing races and the human-recorded reading of candidate names, both female voices. Occasionally, other customers and voters in the noisy lobby overcame the headset ear pads. The input device was a simple phone keypad with larger-sized keys, comfortably held in my lap.

Uh, oh, am I in a loop?

I moved quickly through my choices for the congressional and legislative races. Then things became unfamiliar, with more races for county offices and state Supreme Court seats, all with only a write-in option. Not having any choice, I kept hitting 6 for the next race and 9 to confirm my under-vote and continue to the next race. At one point, my attention drifted and I seemed to be in a loop of hitting next without actually having races announced, maybe between district, county, and state races.

After a while I got bored and tried an actual write-in; “gump” sounded good at the moment and was easy to type, although tedious to spell and confirm. Then I got serious and canceled out of the write-in. In successive races for Supreme Court seats, the synthetic voice seemed to be getting faster and very high-pitched. Now, I can listen to really fast voices on my reading appliances, but by the end of what seemed like 50 races, I couldn’t understand the voice. Nor could I remember how to get to the main menu or adjust the voice. I was stuck, hoping the end would come before I fell asleep at the keypad. Finally, the printer attached to the side clattered and the voice trailed off into oblivion. My near-trance state lifted and I called for the attendant to complete the session.

Had I actually accomplished my voting goals? I think so, as the early races that mattered seemed to be OK, but since I lost control in the middle and was pretty confused toward the end, I can only hope nothing invalidated those early race clicks. The whole process took about 30 minutes, long enough that I had to wake up my driver to leave. I reported my troubles to the poll assistants but left unsure we understood the cause of my loop and voice speed-up. My guess is that the speed-up started when I hit the relevant key during my write-in fumbling and the modes got confused as I skipped through further write-in choices.

Yes, I will vote this way again, but can others?

I had hoped this experience could be recommended to others, but, alas, I fear those less adept at computer interactions might not find the humor in the loop and could freak out at the babbling voices. I will vote this way again in November but next time pay much more attention to the exit, speed, and volume options. Everybody has a limit to the attention and energy they can put into this voting exercise. Half an hour for a handful of races and an enormous number of later vacuous choices is a dubious way of getting the job done.

Further concerns about time commitments, voice shocks, and practice

Another lesson for next time is to invest serious effort into learning about the candidates. I hope to find more help from the SunSounds state audio assistance radio system or to locate better candidate description materials. For example, the AZ Clean Elections brochure that arrived in the mail was organized by race, then district, then party, then candidate, which was beyond my patience to scan or anybody else’s willingness to read me only the District 1 choices on pages 4, 39, and so on. Perhaps voting early gets ahead of the candidate comparisons and recommendations from organizations like the League of Women Voters. Perhaps my “domain knowledge” of elections and state offices made my Google and Dogpile searches susceptible to donate-now organizations. Certainly, I have not yet found a good source of advice directed to people like me voting blind for the first time. What I really want is a web page duplicating the ballot, divided into levels of government, with attached very short bios and links to longer histories, position statements, and reputable sources of candidate comparisons. The HTML and hypertext structuring are important, as PDF is hard to use by audio and often loses the content structure when converted to a text stream. It might also be nice to have a candidate-a-day RSS feed to make the information more digestible in smaller chunks.

I would recommend that others considering an audio or visually assisted voting workstation request a trial. Yes, that means taking up time from election board workers, but I found them helpful, friendly, and interested in feedback. Anybody who can handle a bank ATM by audio should be ready to try out the system. However, someone with hearing problems might not be able to adjust the equipment to their needs in a noisy environment. The long-time blind who readily adapt to new devices should appreciate the new-found independence. However, new Vision Losers face a lot of work to master both the information gathering and the audio-assisted voting process.

My biggest warning is the time commitment to survive the rigors of a long ballot. Had I wanted to actually write in a lot of names, I would have been there until closing time. With so few voters like me, there seems to be little data from which to accumulate experience for a warning label, but this is a practical constraint. Voters need to know how much time to ask of their drivers. With more voters using the assistive workstation, there would be a long wait just to get your chance. I suppose I could have asked for assistance during my loops and voice accelerations, but I just wanted to get out of write-in hell. Far more instructional time could be required for first-time users of the audio assistance, especially if the equipment balks at start-up or printing. And what happens if a voter gives up during a voting session or nearly goes into a trance, as happened to me? Of course, there are other disabilities more complex than vision, such as limits of strength and mobility, that call for different input devices.

Getting a bit more technical, during my earlier visit for a trial we discussed the need for a simulator for voter training on the audible equipment. I’d appreciate knowing if this exists anywhere. Since the user interaction is by phone keypad, a simulator with a mock ballot, as in my trial, could serve people far and wide if they knew which voting system was designated for them. This could be done by phone or as a downloadable or Web 2.0 app, something even I could write if I knew the rules; a sketch of the idea follows this paragraph. I could have called up and learned the instructions in the quiet of my home, memorized my way out when I hit a snag, and also reported problems back to the ballot designers and equipment vendors. Had I known about the write-in race survivor test, I’m not sure I would have followed through with an actual vote. Those suffering from synthetic voice shock could at least determine whether they wanted to try, and were able, to interpret the race announcements and instructions.
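Here is a hypothetical Python sketch of that trainer, a mock ballot driven from the keyboard with roughly the keys I used (6 next, 5 select, 4 back, 9 confirm). The key mapping and races are invented for illustration and are not the vendor’s actual interface; a real version would speak through a TTS engine or over the phone rather than print.

    # Invented mock ballot; a real trainer would load the voter's actual sample ballot.
    BALLOT = [
        ("U.S. Representative, District 1", ["Candidate A", "Candidate B", "Write-in"]),
        ("State Senator", ["Candidate C", "Write-in"]),
    ]

    def announce(text):
        # Stand-in for synthetic speech output.
        print("[voice] " + text)

    def vote():
        choices = {}
        race = 0
        while race < len(BALLOT):
            title, candidates = BALLOT[race]
            announce("Race: %s. Press 6 to skip to the next race, 5 to hear candidates, 4 to go back." % title)
            key = input("key> ").strip()
            if key == "6":
                race += 1                       # under-vote and move on
            elif key == "4" and race > 0:
                race -= 1                       # return to the previous race
            elif key == "5":
                for i, name in enumerate(candidates, start=1):
                    announce("Press %d for %s." % (i, name))
                pick = input("key> ").strip()
                if pick.isdigit() and 1 <= int(pick) <= len(candidates):
                    choice = candidates[int(pick) - 1]
                    announce("You selected %s. Press 9 to confirm." % choice)
                    if input("key> ").strip() == "9":
                        choices[title] = choice
                        race += 1
        announce("Ballot complete. " + "; ".join("%s: %s" % rc for rc in choices.items()))

    if __name__ == "__main__":
        vote()

Even this toy version would have let me rehearse the write-in escape route at home before facing the real keypad.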

While the overall interaction of voting with only audio is really pretty easy, the keypad clearly needs a separate HELP key and a RESTORE DEFAULTS action. Maybe these were available, but I was so deep into figuring out how to reach the end of the ballot that I was not interested in finding the escape button. More seriously, as a software testing expert and veteran system breaker, I really would like to replicate my experiences with the next-race loop and the accelerating voice. It would be too irreverent and silly for a 65-year-old lady to whiz around a county office building crowing that I’d broken the system, lookee, the computer is in a really bad state. No, I really appreciated the professionalism and help of the voting staff, but, well, I think I did break something and wish it could be reported and corrected.

So, why don’t I, a formerly reputable software professional, try to do more? Well, first, with only two years of legal blindness, I am still a learner in the assistive technology world. But more seriously, getting on my high horse, this whole system is an affront to the U.S. citizenry. In my previous post, I equated electronic voting with two mixed metaphors: a “moon shot for democracy” and “extreme voting”, like a sporting challenge.

A rant on eVoting as a ‘bungled moon shot’

Just as Sputnik shocked the U.S. into action on science education, just as a catastrophe on the moon in 1969 would have undermined U.S. self-confidence, and just as the later space shuttle failures signaled a decline in space travel prowess, a definitive failure in our voting system undermines our feeling of living in a democracy. Yet there is every sign that our voting system continues to be bungled, in the names of fancier technology and free enterprise. In my mind, the quest for a technological solution is a doable, long-term project, but only if entrusted to technologists with the expertise and freedom to question the safety of every step in the process, test each component down to its core against its specifications, simulate to exhaustion, and finally rely on combined community acceptance of safety to launch. In many ways, a rocket system is easier to design because it works with and against the continuous laws of physics, whereas a voting system works on discrete math and with and against the laws of human capabilities and differences. The security quality of human interactions with the system is another dimension of complexity, but the bottom line is that voting systems cannot be black boxes. Discrete systems must be subjected to inductive reasoning applied to the code, hardware, and user scenarios, with a huge dose of version control. Experimental software engineering has established the efficacy of software inspection, especially performed early and often using multiple viewpoints from varieties of expertise. Asking a weak testing regime to accept the assurance of vendors of proprietary systems, even against clear signs of fallibility, is like delivering a rocket to the pad, asking the astronauts to jump on, and not telling mission control how the rocket will behave.

My other metaphor, extreme voting, is based on both user and developer experience. It is a lot to ask voting equipment vendors to produce extensions that serve the full range of human differences, including those considered disabilities. I was amazed the keypad and audio system worked as well as they did. Indeed, I might ask why we spend all that money on fancy visual interfaces when audio will do, except for hearing-impaired people. Users like me are forced into extreme and unknown conditions, like long ballots read by unfamiliar voices and marked by never-before-touched keypads. Please accept my invitation to use a bank ATM by audio to get a feeling for this experience. My current ATM transaction time is about a minute because I know the exact sequence of key clicks, but at first I had little idea of the menu structures or of the confirmation, cancellation, and selection instructions to hold in mind. Voting by audio is a similar experience.

To sum up, even though I had prepared myself well, I fell into a mess of write-in races which caused me either to mishandle the keypad input or to find an actual flaw in the system. In either case, the unpredictability of the long ballot and the time required to work through it present not insurmountable, but discomfiting, conditions for voting independently. But I survived, and will continue to vote this way in the big election in November. I will also work hard, in perhaps better information conditions, to identify the races and candidates where I really care about my vote. I certainly do not want to leave wondering if I have voted for the right guy.

References for Voting without Vision

  1. Previous post on extreme Voting and a Moon Shot for Democracy
  2. California Secretary of State appraisal of voting system security and accessibility
  3. Concerns of computer scientists about electronic voting systems
  4. Audio version of this post