Posts Tagged ‘synthetic speech’

What do Vision Losers want to know about technology?

April 5, 2010


Hey, I've been off on a tangent: rather than writing about adjusting to vision loss, I've been ranting about, and occasionally praising, website accessibility. Also absorbing my blogging efforts was a second run of Sharing and Learning on the Social Web, a lifelong learning course. My main personal tutors remain the wise people of #a11y on Twitter and their endless supply of illuminating blog posts and opinions. You can track my fluctuating interests and activities on Twitter @slger123.

To get back in action on this blog, I thought the WordPress search-term stats might translate into a sort of FAQ, or an update on what I've learned recently. Below are subtopics suggested by my interpretations of the terms people used to reach this blog. Some searches land here inaccurately: people looking for tidbits on the 'Twilight' movies or books might be surprised to find a review of the memoir of an elderly gentleman battling macular degeneration in the 1980s. Too bad for them, but there are also people searching for personal experiences of losing vision and for technology that overcomes its limitations. These folks are my target audience, who might benefit from my ramblings and research. By the way, comments or guest posts would be very welcome.


This post focuses on technology while the next post addresses more personal and social issues.

Technology theme: synthetic speech, screen reader software, eBooks, talking ATMs

Terms used to reach this blog

  • stuff for blind people
  • writing for screen readers
  • artificial digital voice mp3
  • non-visual reading strategies
  • book readers for people with legal blind
  • technology for people with a print-disability
  • apps for reading text
  • what are the best synthetic voices
  • maryanne wolf brain’s plasticity
  • reading on smart phones
  • disabled people using technology
  • synthetic voice of booksense
  • technology for legally blind students
  • audio reading devices
  • reading text application
  • synthetic speech in mobile device
  • the use of technology and loss of eyesight
  • installer of message turn into narrator

NVDA screen reader and its voices

    Specific terms on NVDA reaching this blog:

  • NVDA accessibility review
  • voices for nvda
  • nvda windows screen reader+festival tts 1
  • videos of non visual desktop access
  • lag in screen reader speaking keys
  • nvda education accessibility

Terminology: screen reader software provides audio feedback by synthetic voice to users operating primarily on a keyboard, announcing events, listing menus, and reading globs of text.


How is NVDA progressing as a tool for Vision Losers?
Very well, with increasing acceptance. NVDA (NonVisual Desktop Access) is a free screen reader developed under an international project of innovative and energetic participants, with support from Mozilla and Yahoo!. I use NVDA for all my web browsing and Windows work, although I probably spend more hours with non-PC devices like the Levelstar Icon for Twitter, email, news, and RSS, as well as the BookSense and Bookport for reading and podcast listening. NVDA continues to be easy to install and responsive, gradually gaining capabilities like Flash and PDF, but occasionally choking on memory-hog applications and heavy-duty file transfers. Rarely do I think I'm failing because of NVDA limitations, but I must continually upgrade my skills and complain about website accessibility (oops, there I go again).

The voice issue for NVDA is its default startup with a free, open source synthesizer called eSpeak. The very flexible youngsters who have lived with TTS (text-to-speech) their whole lives are fine with this responsive voice, which can be carried anywhere on a memory stick and adapted for many languages. However, oldsters often suffer from "synthetic voice shock" and run away from the offensive voices. Now devices like the Amazon Kindle and the iPod/iTouch gadgets use Nuance-branded voices whose quality falls between eSpeak and the even more natural voices from NeoSpeech, AT&T, and other vendors. Frankly, this senior citizen prefers older robotic-style voices for book reading, especially when managed by excellent firmware like the Bookport Classic from APH. Here's the deal: (1) give eSpeak a chance, then (2) investigate better voices available at the Voice and TextAloud Store at Nextup.com. Look carefully at licensing, as some voices work only with specific applications. The main thing to remember is that your brain can adapt to listening via TTS with some practice, and then you'll have a world of books, web pages, newspapers, etc., plus this marvelous screen reader.

Apple Mania effects on Vision Losers

Translation: What are the pro and con arguments for switching to Apple computers and handheld devices for their built-in TTS?
Good question. Screenless switching is a movement of visually impaired people from PCs to Macs because the latest Mac OS offers VoiceOver, a built-in screen reader with text-to-speech. Moreover, the same capabilities are available on the iPhone, iTouch, and iPad, with different specific voices. Frankly, I don't have enough experience to feel comfortable with VoiceOver, nor knowledge of how many apps actually use the built-in capabilities. I'm just starting to use an iTouch (iPod Touch), solely for experimentation and evaluation. So far, I haven't got the hang of it, drawing my training from podcasts demonstrating the iPhone and iTouch. Although I consider myself skilled at using TTS and synthetic speech, I have trouble accurately understanding the voice on the iTouch, which is necessary to blend comfortably with gesturing around a tiny screen and, gulp, an onscreen keyboard. There's a chicken-and-egg problem here: I need enough apps and content to make the iTouch compelling to gain usage fluency, but need more fluency and comfort to get the apps that might hook me. In other words, I'm suffering from mild synthetic voice shock compounded by gesture shyness and iTunes overload.


My biggest reservation is the iTunes stranglehold on content and apps, because iTunes is a royal mess and not entirely accessible on Windows, not to mention that it wants to sell things I can get for free. Instead of iTunes, I get my podcasts in the Levelstar Icon RSS client and move them freely to other devices like the BookSense. Like many others with long Internet experience, such as RSS creator and web tech critic Dave Winer, I am uncomfortable with Apple controlling content, applications, and our very own materials, reducing users to consumers rather than fostering their creativity. Could I produce this blog on an iPad? I don't know. Also, Apple's very innovative approach to design doesn't do much to help the web as a whole, where everybody is considered a competitor for Apple's market share rather than a collaborator. Great company and products, but not compelling to me. The Google Android marketplace is more open and will pick up many apps also developed for Apple products, but doesn't yet seem accessible at a basic level or in its available apps. Maybe 2010 is the year to just listen and learn while these devices, software, and markets develop, while I continue to live comfortably on my Windows PC, Icon Mobile Manager and docking station, and book readers. Oh, yeah, I'm also interested in Gnome accessibility, but that's a future story.

The glorious talking ATM

Terms used to reach this blog

  • talking ATM instructions
  • security features for blind in ATM


What could be more liberating than to walk up to a bank ATM and transact your business even if you cannot see the screen? Well, this is happening in many locations and is an example for the next stage of independence: store checkout systems. Here's my experience. Someone from the bank, or an experienced user, needs to show you where to insert your card and plug in your ear buds. After that the ATM should provide instructions on voice adjustment and menu operations. You won't be popular if you practice for the first time at a busy location or time of day, but after that you should be as fast as anybody fumbling around from inside a car or just walking by. Two pieces of advice: (1) pay particular attention to CANCEL so you can get away gracefully at any moment, and (2) always remove your ear buds before striding off with your cash. I've had a few problems: an out-of-paper condition or mis-feed doesn't deliver a requested receipt, the card protocol changed from insert-and-hold to insert-and-remove, an unwanted offer of a credit card delayed transaction completion, and it's hard to tell when a station is completely offline. I've also dropped the card, sent my cane rolling under a car, and been recorded in profanity and gestures by the surveillance camera. My biggest security concern, given the usual afternoon traffic in the ATM parking lot, is the failure to eject or catch a receipt, which I no longer request. But overall, conquering the ATM is a great step for any Vision Loser. It would also work for MP3 addicts who cannot see the screen on a sunny day.

Using WordPress

Terms:


  • Wordpress blogging platform accessibility

  • wordpress widget for visual impaired

Translation: (1) Does WordPress have a widget for blog readers with vision impairments, e.g. to increase contrast or text size? (2) Does WordPress editing have adjustments for bloggers with vision impairment?


(2) Yes. 'Screen settings' provides alternative modes of interaction; for example, drag and drop uses a combo box to indicate position in a selected navigation bar. In general, although each blog post has many editing panels (tags, title, text, visibility, etc.), these are arranged in groups, often collapsed until clicked for editing, if needed. Parts of the page are labeled with headings (yay, H2, H3, …) that enable a blog writer with a screen reader to navigate rapidly around the page. Overall, good job, WordPress!


However, (1) blog reader accessibility is a bit more problematic. My Twitter community often asks for the most accessible theme but doesn't seem to converge on an answer. Using myself as tester, I find WordPress blogs easy to navigate by headings and links using the NVDA screen reader. But I'm not reading by eyesight, so I cannot tell how well my own blog looks to either sighted people or those adjusting fonts and contrast. Any feedback would be appreciated, but so far no complaints. Frankly, I think blogs as posts separated by headings are ideal for screen reading, and better than scrolling if articles are long, like mine. Sighted people don't grok the semantics of H2 for posts, H3 for subsections, and so on. My pet peeve is themes that place long navigation sidebars *before* the content rather than to the right. When using a screen reader I need to bypass these, and the situation is even worse when the page downloads as a post to my RSS client. So, my recommendation on a WordPress theme: two columns, with content preceding navigation, except for the header title and About.

Books: iBooks, eBooks, Kindle, Google Book Search, DAISY, etc.

Terms

  • kindle+accessibility
  • how to snapshot page in google book
  • is kindle suitable for the visually impaired?
  • how to unlock books “from kindle” 1
  • is a kindle good for partially blind peo 1
  • access ability of the kindle

I'll return to this broad topic of readers and reading in a later post. Meantime, here's an NYTimes op-ed article on the life cycle and ecosystem costs of print and electronic books. My concern is that getting a book into one's sensory system, whether by vision or audio, is only the first step in reading any material. I'm working on a checklist for choices and evaluation of qualities of reading. More later.

Searching deeper into Google using the Controversy Discovery Engine

You know how the first several results from a Google search are often institutions promoting products, or summaries from top-ranked websites? These are often helpful, but even more useful, substantive, and controversial material may be pushed far down the list of search result pages. There's a way to bring these more analytic pages to the surface by extending the search terms with words that rarely appear in promotional articles, terms that revolve around controversy and evidence. The Controversy Discovery Engine assists this expanded searching. Just type in the term as you would for Google and choose from one or both lists of synonym clusters to add to the term. The magic here is nothing more than asking for more detailed and analytic language in the search results. You are free to download the page to your own desktop to avoid any additional tracking of search results through its host site, to have it available any time, or to modify its lexicon of synonyms.
Some examples:

  1. “print disability” + dispute
  2. “legally blind” + evidence
  3. “NVDA screen reader” + research
  4. “white cane” + opinion
  5. “Amazon Kindle” accessibility + controversy
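The expansion itself is purely mechanical. Here is a minimal Python sketch of the idea; the synonym clusters and the quoting convention are my own assumptions for illustration, not the engine's actual lexicon:

```python
# Minimal sketch of the "controversy discovery" idea: append analytic or
# controversy-flavored words to an ordinary search term. The clusters below
# are illustrative assumptions, not the engine's actual lexicon.
CONTROVERSY_WORDS = ["controversy", "dispute", "criticism", "opinion"]
EVIDENCE_WORDS = ["evidence", "research", "study", "analysis"]

def expand_query(term, clusters):
    """Return one expanded query per chosen synonym, quoting the base term."""
    queries = []
    for cluster in clusters:
        for word in cluster:
            queries.append('"%s" + %s' % (term, word))
    return queries

if __name__ == "__main__":
    for q in expand_query("Amazon Kindle accessibility", [CONTROVERSY_WORDS]):
        print(q)   # paste any of these into an ordinary Google search box
```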

    Feedback would be much appreciated if you find this deeper search useful.

    Adjustment themes: canes, orientation and mobility, accessibility advocacy, social media, voting, resilience, memories, …

    Coming in next post!

Grafting web accessibility onto computer science education

December 7, 2009

Note: this is a long post with webliography in the next article.
There is also a recorded tour of CS web sites as an MP3 download.

Understanding web accessibility through computational thinking


This post is written for distribution during the first proclaimed National Computer Science Education Week, December 7, 2009. My goal is to stimulate awareness within the CSE community of the importance of web and software accessibility, to society at large and to the proper development of associated skills within CS curricula. Taking this further, I offer a call to action to renovate our own websites for the purposes of (1) improved service, (2) learning and practice, and (3) dissemination of lessons learned to other academic entities, including professional organizations.


Recognizing that traditional, accredited CS curricula do not define a role for accessibility, I suggest actions that can be grafted into courses as exercises, readings, debates, and projects. To further legitimize accessibility and improve its uptake, many of these problems can be cast as computational thinking in a framework of drivers from society, technology, and science.

Definitions and Caveats

Caveat: I do not represent the blindness communities, standards groups, or any funding agency.
Also, I limit this accessibility context to the USA and visual impairment disability.

Here is my personal definition framework:

  • Definition: disability = inability to independently perform daily living tasks due to physical or mental causes

    Example: I cannot usually read print in books or news, nor text on a computer screen at size 14.

    Example: I cannot usually follow a mouse cursor to a button or line of text to edit.

  • Definition: Assistive Technology (AT) = hardware or software that overcomes some limits of a disability

    Example: A screen magnifier can track a mouse cursor then smooth and enlarge text in the cursor region.

    Example: A screen reader can announce screen events and read text using synthetic speech.

  • Definition: Accessibility = quality of hardware and software to (1) enable assistive technology and (2) support the AT user to the full extent of their skills without unnecessary expenditure of personal energy

    Example: A web page that manages focus through keyboard events enables a screen reader to assist a user in operating the page with ease, provided hands are working. The same is true for sighted users.

    Example: A screen magnifier enables reading text and screen objects, but at such a low rate that I cannot accomplish much usual work.

    Note: I am conflating accessibility with usability here, with usability usually referring beyond disabilities. Informally, to me, "accessibility" means my screen reader is fully operational, not in the way, and there is no reason I cannot achieve the goal of page success as well as anybody.

  • Definition: Accommodation = explicit human decisions and actions to accomplish accessibility

    Example: Modifying a web page enhances comprehension for a screen reader user; see POSH computational thinking below.

    Example: Adapting security settings on a PC to permit a job applicant with a screen reader on a pen drive to read instructions and complete tests and forms.

    Example: A curb cut in a sidewalk enables wheelchairs to more easily cross streets. The same is true for baby strollers, inattentive pedestrians, visually impaired people, luggage carts, skateboards, etc.


I base my analysis and recommendations on several domains of knowledge:

  • Learning and acquisition of skills as a recent Vision Loser, becoming "print disabled" and "legally blind", now at an intermediate skill level

  • Computer scientist, active for decades in formal methods and testing, highly related to “computational thinking” with broader professional experience in design methods and technology transfer.

  • Intermittent computer science and software engineering educator at undergraduate and master’s level programs with experience and opinions on accreditation, course contents, student projects, and associated research

  • Accelerated self-study and survival training from the community of persons with disabilities, the industry and professions serving them, and the means for activism based in social media like twitter, blogs, and podcasts

  • Lingering awareness of my own failings before my vision loss, including software without accessibility hooks, web pages lacking structural/semantic markup, and, worst of all, omission of accessibility considerations from most courses and projects. My personal glass house lies in slivers around me as I shout, "If only I knew then, when I was professionally active, what I know now, as a semi-retiree living with the consequences and continuing failures of my profession."

What is "computational thinking" and what does it have to do with accessibility?

This term was coined by Dr. Jeannette Wing in a 2006 article and is best expressed in her Royal Society presentation and podcast conversations. For our purposes, CT asks for more precise description of the abstractions used in assistive technology, web design, mainstream browsers, etc. The gold standard of web accessibility for my personal kind of disability, shared with millions of Americans, is the bottom line of reading and interacting with websites as well as currently sighted persons do. To an amazing degree, audio and hearing replace pixels and seeing, provided designs support the cooperation of assistive technology, both at the primitive levels and in the costs of effort expended. I'll illustrate some fledgling computational thinking in a later section and by touring CS and other websites, but, sorry, this won't be a very pleasant experience for either me the performer or the listeners.


CSE can benefit from the more rigorous application of CT to meet its societal obligations while opening up new areas of research in science and technology leading to more universal designs for everybody. To emphasize, however, this is not a venture requiring more research before vast improvements can be achieved, but rather a challenge to educators to take ownership and produce more aware computing professionals. …

Driving forces of society, technology, and science


Here’s a summary of trends and issues worthy of attention within CSE and suggested actions that might be grafted appropriately.

Driving forces from society

Computer science education has a knowledge gap regarding accessibility


As excellently argued in the course description "Accessibility First", web design in general, accessibility, and assistive technology are at best service-learning or research specialties falling under human-computer interaction or robotics. Where do CS students gain exposure to human differences, the ethics of producing and managing systems usable by everybody, and the challenges of exploring design spaces with universal intentions?


The extensive webliography below offers the best examples I could find, so please add others as comments. Note that I do not reference digital libraries because (1) the major ACM Portal is accessibility-deficient itself and (2) I object to the practice of professional contributions being available only for a charge. The practice of professional-society control over publications forces a gulf between academic researchers and a vibrant community of practitioners, including designers, tool builders, accessibility consultants, and activists.


Action: Use the above definition framework to describe the characteristics of the following as ordinary or assistive: keyboards, tablets with stylus, onscreen keyboard, mouse, screens, fonts, gestures, etc. How do these interfaces serve (1) product developers and (2) product users? Where is the line between assistive and mainstream technology?


Action: See the proposed expansion of the National Computer Science Education Week proclamation in our conclusions. Debate the merits of both the "whereas" assumptions and the "therefore" call to action. Are these principles already adopted and practiced within CSE?

Disability is so prevalent that accessibility is a uniform product requirement.

Being disabled is common: an estimated 15% of the U.S. population has visual impairment serious enough to require adjustments from sites designed assuming full capabilities of acuity, contrast, and color. Eyesight changes are inevitable throughout life, even without underlying conditions such as macular degeneration or severe myopia. Visual abilities also vary with ambient conditions such as lighting, glare, and now the size and brightness of small screens on mobile devices. Considering other impairments, a broken arm, carpal tunnel injury, or muscle weakness gives a different appreciation for interaction with a mouse, keyboard, or touch screen. As often said, we will all be disabled in some way if we live long enough. Understanding of human differences is essential to the production of good software, hardware, and documentation. Luckily, there are increasingly more specimens, like me, willing to expose and explain their differing abilities, and a vast library of demonstrations recorded in podcasts and videos.


Action: View YouTube videos such as the blind web designer using a screen reader to explain the importance of headings on web pages. Summarize the differences in how he operates from currently sighted web users. How expensive is the use of headings? See more later in our discussion of CT for headings.


Action: Visit or invite the professionals from your organization's disability services office, learning center, or whatever it is called. These specialists can explain disabilities, assistive technology, educational adjustments, and legal requirements.


Action: Is accessibility for everybody, everywhere, all the time a reasonable requirement? What are the ethics and tradeoffs of a decision against accommodation? What are the responsibilities of those requiring accommodations?

The ‘curb cut’ principle suggests how accessibility is better for everyone


Curb cuts for wheelchairs also guide blind persons into street crossings and prevent accidents for baby strollers, bicyclists, skateboarders, and inattentive walkers. The “curb cuts” principle is that removing a barrier for persons with disabilities improves the situation for everybody. This hypothesis suggests erasing the line that labels some technologies as assistive and certain practices as accessibility to maximize the benefits for future users of all computer-enabled devices. This paradigm requires a new theory of design that recognizes accessibility flaws as unexplored areas of the design space, potential harbingers of complexity and quality loss, plus opportunities for innovation in architectures and interfaces. Additionally, web accessibility ennobles our profession and is just good for business.


Action: List physical barriers and adaptations in your vicinity, not only curb cuts, but signage, safety signals, and personal helpers. Identify how these accommodate people with canes, wheelchairs, service animals, etc. And also identify ways these are either helpful or hampering individuals without disabilities. Look at settings of computers and media used by instructors in classrooms. Maybe a scavenger hunt is a good way to collect empirical physical information and heighten awareness.


Action: Identify assistive technology and accessibility techniques that are also useful for reasons different from accessibility, e.g. a keyboard-enabled web page or browser tabs supporting power users.

Persons with disabilities assert their civil rights to improve technology.


While most of us dislike lawsuits and lawyers, laws are continuously tested and updated to deal with conflicts, omissions, and harm. Often these are great educational opportunities, both on the challenges of living with disabilities and on the engineering modifications, sometimes minor, for accommodations. Commercial websites like Amazon, iTunes, the Law School Admission Test, the Small Business Administration, and Target are forcefully reminded that customers are driven away by inaccessibility of graphics, menus, forms, and shopping carts. Conversely, I recently had a quick and easy checkout from a Yahoo small business website, greatly raising my respect for, and likelihood of returning to, both the product vendor and the website provider.


Devices such as controllers on communication systems, the Amazon Kindle, and new software like Google Wave and the Chrome browser often launch with only accessibility promises, excluding users offensively and missing feedback opportunities from persons with disabilities. Over and over, accessibility exemplifies the proverbial software rule that missing requirements cost more to fix the later they are addressed, whether the motivation is legal or business. While a lawsuit can amazingly accelerate accessibility, companies with vast resources like Microsoft, Oracle, Blackboard, and Google are now pitted in accessibility races with Yahoo, Apple, and others. The bar is rapidly being raised by activism and innovation.


For many, the social good of enabling equal access to computing is an attractor to a field renowned for nerds and greed. Social entrepreneurs offer an expansive sense of opening doors not only to education and entertainment but also to employment, which now stands around 20% for disabled persons. Many innovative nonprofit organizations take advantage of copyright exemptions to build libraries and technology aids offering alternatives to print and traditional reading.


The computing curb cuts principle can motivate professionals, services, and end users to achieve the potential beauty and magic of computing in everyday life, globally, and for everybody who will eventually make the transition into some form of sensory, motor, or mental deficiency. But, first, mainstream computing must open its knowledge and career paths to encompass the visionaries and advances now segregated. All too often, persons with disabilities are more advanced, diversified, and skillful in ways that could benefit not-yet-disabled people.


Action: The ubiquitous bank ATM offers a well-documented ten-year case study of how mediation led to a great improvement in independent living for visually impaired people. Take those ear buds out of the MP3 player and try them on a local ATM, asking for service help if needed or if the ATM is not voice enabled. Using a voice-enabled ATM also provides insight into the far more problematic area of electronic voting systems.


Action: The Amazon Kindle lawsuit by blind advocates against universities considering, or rejecting, the device and its textbook market provides a good subject for debate.


Action: On the home front, pedagogical advances claimed for visual programming languages like Alice are not equally available to visually impaired students and teachers. First, is this a true assertion? How does this situation fit the definition of equal or equivalent access to educational opportunities? Should the platform and implementation be redone for accessibility? Note: I've personally seen a student rapidly learn OO concepts and sat in on CS1 courses with Alice, but I am totally helpless with only a bright, silent blob on the screen after download. Yes, I've spoken to SIGCSE and Alice personnel and suggested accessibility options, but never received a response on what happens to the blind student who signs up for an Alice-based CS course. Please comment if you have relevant experience with accommodations and Alice or other direct manipulation techniques.

The Web has evolved a strong set of standards and community of supporters.

W3C-led efforts are now at 2.0, with an evolved suite of standards products, including documents, validators, and design tools. Standards go a long way toward enabling accessibility through both their prescriptions and their rationales, often drawing on scientific principles such as color perception. But the essence of web standards is to define the contracts among browsers and related web technologies that enable designers to predict the appearance of, and interaction with, their designed sites and pages. The theme of WCAG 2.0 sums up as Perceivable, Operable, Understandable, and Robust. We all owe a debt to the web standards mafia for their technical contributions, forceful advocacy to vendors, and extensive continuing education.


Web standards are sufficiently mature, socially necessary, and business-worthy that open, grassroots-motivated curricula are being defined. CSE people who understand CT may well be able to contribute uniquely to this effort. In any case, questions about the relationship of traditional CS education to this independent curriculum movement must be addressed, considering the large workforce of web designers, including accessibility specialists. Furthermore, web design inherently requires close designer and client communication, making it difficult to offshore into different cultural settings.


Action: Use the #accessibility and #a11y hashtags on Twitter to track the latest community discussions, mostly presented in blogs and podcasts. Pick a problem, like data tables, to learn the accessibility issues from these experts. Find and create good and bad examples, but note that you may need screen reader software for this. Can you characterize the alternatives and tradeoffs in CT terms?


Action: Create or try some web page features in several different browsers. Notice the differences in appearance and operation. Which sections of WCAG apply to noticeable differences or similarities?


Action: What is the career connection of computer science and web design? What are the demographics, salary, portability, and other qualities of web design versus traditional CS and SE jobs?

Transparency and dissemination of federal government data is drawing attention to accessibility

First, a remodeled whitehouse.gov drew accolades and criticisms. New websites like data.gov and recovery.gov appeared to reinforce the Obama administration's promises. Disability.gov showed up on my radar screen through its Twitter flow. All these web sources are now in my RSS feed reading regime. But the websites still seem behind on some aspects of accessibility, and under scrutiny by activists, including me. Personally, I'd be satisfied with a common form for requesting data and services, not the form elements themselves but well-evolved interaction patterns for feedback and validation. More importantly, the data sets and analyses are challenging for visually impaired people, suggesting new scientific research and novel technology to utilize alternative non-visual senses and brain power.


Additionally, innovation in assistive technology and accessibility is recognized at the National Center for Technology Innovation, with emphasis on portability and convergence with mainstream technology. Indeed, apparently, there are stimulus funds available in education and in communication systems.


Action: Visit the various USG cabinet department websites and then write down your main perception of their quality and ability to answer questions.


Action: Find examples of USG website forms users fill out for contacts, download of data sets, mailing lists, etc. How easy is filling out the forms? What mistakes do you make? How long does each take? Which forms are best and worst?

Action: Check on recovery.gov whether any stimulus funds are being spent on assistive technology. Or perhaps that information is on Department of Education sites as plans or solicitations.

Mainstream and assistive technologies are beginning to cross over.


BusinessWeek notes a number of examples. Clearly mobile devices are driving this change. Embedding VoiceOver in Mac OS, then transferring it to products like the iPod Touch, has motivated a number of blind "screenless switchers". Google calls its version on Android "eyes-free". For those long stuck in the "blindness ghetto" of products costing thousands of dollars, with small-company support and marketing chains running through disability support service purveyors, this is a big deal. Conversely, although limited by the terms of the Chafee Amendment, members of Bookshare have enjoyed access to a rapidly growing library of texts, really XML documents, read in synthetic speech by now pocket-sized devices that cross Kindle and iPod capabilities. There's never been a better time to lose some vision if one is a technology adopter willing to spend retirement funds to remain active and well informed. The aging baby boomer generation that drives USA cost concerns will be a vast market in need of keeping up with the government flow of information and electronic documentation, not to mention younger generations.


But while this Vision Loser is happy with the technology trend, for those disabled around the world working with older or nonexistent computing environments, these trends, together with free and open source software, make truly life-changing differences.


Action: What are the job qualifications for working in the areas of assistive technology and accessibility? Is this business growing, and in what regions of the USA or the world?

Technology drivers

Social media opens the culture of disability and the assistive markets for all computing professionals to explore.


While the cultures of disability may operate separate systems of societies and websites, in the case of vision impairment the resources are right there for everybody to learn from, primarily via demos disseminated as podcasts by Blind Cool Tech, Accessible World, and vendors. Several annual conferences feature free exhibit halls visited by disability professionals, independent disabled people like me, and luminaries like Stevie Wonder. CSUN is the biggest and a good place to get vendor and product lists. Again, many products can be seen at local disability support services. Local computer societies and CS courses may find well-equipped people who can present, like my "Using Things That Talk". This is a vibrant world of marketing closely coupled with users, highly professional demos, and innovative developers, often disabled themselves. I personally treasure shaking hands with and thanking the young blind guys behind my Levelstar Icon and NVDA screen readers. Also, mailing lists are to various degrees helpful to the newly disabled, and rarely particular about age and gender. It's a great technology culture to be forced into.

Action: Whenever you're in a large enough city, visit its local vision training centers. I think you'll be welcome, and you might leave as a volunteer.


Action: With well over a thousand podcasts, dozens of blogs, and a regular tweet stream, the entry points for learning are abundant. However, the terminology and styles of presenters and presentations vary widely. Consider an example, often used in computer science, like David Harel’s watch, the microwave oven, or elevator controller. How do the state diagrams manifest in speech interfaces? Can you reverse engineer device descriptions using computational thinking? How could this help disabled users or accessibility providers?

Text-to-speech (TTS) is a mature technology with commodity voices.


Screen reader users rely on software-implemented speech engines, which use data files of word-to-sound mappings, i.e. voices. Built into Mac OS, and widely available in Windows and Linux, this mature technology supports a marketplace of voices, available in open source or purchased with varying degrees of licensing at a cost of about $25. Comparable engines and voices are the main output channel of mobile assistive devices, like the Levelstar Icon I am typing on now. Web pages, books, dialogs, email, … reading is all in our mind, through our ears, not our eyes. It is an amazing and not yet widely appreciated breakthrough from a lineage of speech pioneers dating back to 1939, through DECtalk and AT&T Natural Voices, and now interactions with voice recognition.
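For readers who have never heard their computer speak, here is a minimal sketch using the third-party pyttsx3 Python package, one of several wrappers around the speech engines already shipped with Windows, Mac OS, and Linux; the voices it lists depend entirely on your installed engine, and the sample sentence is just an assumption for demonstration:

```python
# Minimal text-to-speech sketch using the pyttsx3 wrapper around the
# speech engines already installed on Windows, Mac OS, or Linux.
# Install first with: pip install pyttsx3
import pyttsx3

engine = pyttsx3.init()           # picks the platform's default engine
engine.setProperty("rate", 180)   # words per minute; practiced listeners go much faster

# List the voices the local engine offers, then speak a sample sentence.
for voice in engine.getProperty("voices"):
    print(voice.id, voice.name)

engine.say("Reading is all in our mind, through our ears, not our eyes.")
engine.runAndWait()
```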


Action: Wikipedia has a great chronology and description of synthetic speech. Track this with Moore’s law and the changes of technology over decades.


Action: Compare synthetic voices, e.g. using samples from vendor nextup.com or the ‘As Your World Changes’ blog samples.

Processors and storage enable more and more talking devices. Why not everything?

Alarm clocks, microwave ovens, thermostats, and many more everyday objects are speech enabled to some degree; see the demos on Blind Cool Tech and Accessible World. I carry my library of 1000+ books everywhere in a candy-bar-sized screenless device. But why stop before these devices are wirelessly connected into meaningful contextual networks? Thermostats could relay information about climate and weather trends, power company and power grid situations, and feedback on settings and recommended adjustments. Devices can carry their own manuals and training.


Action: Listen to podcasts on Blind Cool Tech and Accessible World about talking devices and how they are used by visually impaired people. Reverse engineer the devices into state machines and use cases, and write conversations between devices and users in "natural language", assuming ease of speech output.
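As a starting point for that exercise, here is a toy Python sketch of a talking thermostat modeled as a small state machine; the states, events, and spoken phrases are my own inventions, purely for illustration of the reverse-engineering exercise:

```python
# Toy state machine for a hypothetical talking thermostat.
# States, events, and announcements are invented for illustration only.
TRANSITIONS = {
    ("idle", "too_cold"): "heating",
    ("idle", "too_hot"): "cooling",
    ("heating", "at_setpoint"): "idle",
    ("cooling", "at_setpoint"): "idle",
}

ANNOUNCEMENTS = {
    "idle": "Holding at {setpoint} degrees.",
    "heating": "Heating. Current temperature {temp} degrees.",
    "cooling": "Cooling. Current temperature {temp} degrees.",
}

def step(state, event, temp, setpoint):
    """Apply one event, 'speak' (here: print) the announcement, return the new state."""
    new_state = TRANSITIONS.get((state, event), state)
    print(ANNOUNCEMENTS[new_state].format(temp=temp, setpoint=setpoint))
    return new_state

if __name__ == "__main__":
    state = "idle"
    state = step(state, "too_cold", temp=62, setpoint=70)    # "Heating. ..."
    state = step(state, "at_setpoint", temp=70, setpoint=70) # "Holding at 70 degrees."
```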


Action: Inventory some devices that might be redesigned to be talking, even talkative: electrical or chemical laboratory instruments, medical devices, home appliances, cars and other moving things, etc. But what would these devices say? How do they avoid noise pollution? Interference? Annoyance?


Action: Computer science researchers are great at devising advanced solutions that provide service to relatively few disabled persons. For example, I have no use for GPS because if I'm somewhere I don't know, I'm in bigger trouble than needing coordinates. This would be different in a city with public transportation, maybe. How do we evaluate technology solutions with the user, not the technology purveyor, as the main beneficiary?

Pivotal technology for the visually impaired, the screen reader, is rapidly evolving through open source

A screen reader doesn't really read pixels but rather the interfaces and objects in the browser and desktop. GUI objects expose their behaviors and properties for the screen reader to read and operate via TTS. Listen to the demos of CS websites you may be familiar with. Unfortunately, the marketplace for screen readers has been priced at over $1000, with steep software maintenance agreement (SMA) upgrades and limits on trials and distribution. Products are largely sold to rehab and disability services and passed on to users, with limited sales to individuals. This is a killer situation for older adults who find themselves needing assistance but without the social services available to veterans, students, and employees covered by mandates. Worse, product patents are being employed by lawyers and company owners (some non-US) in competitive lawsuits.

However, the world has changed with the development over the past few years of NVDA, NonVisual Desktop Access, originating in Australia with grants from Mozilla, then Yahoo and Microsoft. A worldwide user community adapts NVDA for locales and TTS languages, with constant feedback to the core developers. Gradually, through both a modern language (Python) and collaborations with browser developers, NVDA is challenging the market. You can't beat free, portable, and easily installed if the product works well enough, as NVDA has for me since 2007. It's fun to watch and support an agile upstart, as the industry is constantly changing with new web technologies like ARIA. The main problem with NVDA is robustness amid the competing pools of memory resources and the inevitable Windows restarts and unwanted updates.

Action: Download and install NVDA. Listen to demos to learn its use. You will probably need to upgrade TTS voices from its distributed, also open source, eSpeak.

Action: Learn how to test web pages with NVDA, with tutorials available from WebAIM and Firefox. Define testing criteria (see standards) and processes. Note: this is a good area for new educational material, building on CS and SE testing theories and practices.


Action: Develop testing practices, tools, and theories for NVDA itself. Since screen readers are abstraction oriented, CT rigor could help.


Action: Modify NVDA to provide complexity and cost information. Is there a Magic Metric that NVDA could apply to determine, with, say, 80% agreement with visually impaired users, that a page was OK, a DoOver, or of questionable quality in some respect?

Structured text enables book and news reading in a variety of devices.


DAISY is a specification widely implemented to represent books, newspapers, magazines, manuals, etc. Although few documents fully exploit its structuring capabilities, in principle a hierarchy of levels with headings allows rapid navigation of large textual objects. For example, the Sunday NY Times has 20 sections (editorials, automobiles, obituaries, etc.) separated into articles. Reading involves arrowing to interesting sections, selecting articles, and listening in TTS until the end of an article, or clicking onward to the next. Books arrive as folders usually smaller than 1 MB. Reader devices and software manage bookmarks, possibly in recorded voice, and the last stopping point, caused by user action or a sleep timer. In addition to Audible and national narrated reading services with DRM, the TTS reading regime offers a rich world: 60,000+ books contributed by volunteers and publishers to Bookshare, and soon over 1M DAISY-formatted public books through bookserver.org.
These are not directly web accessibility capabilities as in browsers, but the devices do read HTML as text, support RSS reading of articles on blogs, and include browsers with certain limits, such as no Flash.
Over time, these devices contribute to improved speech synthesis for use everywhere, including the replacement of human voice organs. Stephen Hawking, blogger heroine "left thumbed blogger" Glenda with cerebral palsy, and others use computers and mobile devices simply to communicate speech.


Action: Listen to podcast demos of devices like the Icon, BookSense, Plextalk, and Victor Stream. What capabilities make reading possible, tolerable, or pleasant? Voice, speed, flexibility, cost, access, …?

Accessibility tools are available, corresponding to static analyzers and style checkers for code.

While not uniformly agreeing, accurate, or helpful, standards groups provide online validators to "test" accessibility. For example, WAVE from webaim.org marks up a page with comments derived from web standards guidelines, like "problematic link", unmatched brackets, JavaScript interactions (if JavaScript is disabled), header outline anomalies, missing graphic explanations, and small or invisible text. It's easy to use this checker: just fill in the URL. However, interpreting the results takes some skill and knowledge. Just as with a static analyzer, there are false hits, warnings where the real problem is elsewhere, and a tendency to drive developers into details that miss the main flaws. Passing with clean marks is also not sufficient, as a page may still be overly complex or incomprehensible.
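In the spirit of those static analyzers, here is a small standard-library Python sketch that flags just two of the simplest problems, images with no alt attribute and "click here" links; real tools like WAVE check far more, and the sample markup is invented:

```python
# Toy accessibility "lint": flag <img> tags with no alt attribute and links
# labeled "click here". Real checkers such as WAVE do far more than this.
from html.parser import HTMLParser

class MiniChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.link_text = ""
        self.problems = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and attrs.get("alt") is None:
            self.problems.append("image missing alt attribute: %s" % attrs.get("src", "?"))
        if tag == "a":
            self.in_link, self.link_text = True, ""

    def handle_data(self, data):
        if self.in_link:
            self.link_text += data

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False
            if self.link_text.strip().lower() == "click here":
                self.problems.append("uninformative link label: 'click here'")

checker = MiniChecker()
checker.feed('<p><img src="logo.png"><a href="/report.pdf">click here</a></p>')
print("\n".join(checker.problems))
```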


Action: Below is a list of websites from my recorded tour. Copy each link into the webaim.org WAVE tool (not the Google one) and track the markup and messages against my complaints or other problems. Show how you would redesign the page, if necessary, using this feedback.


Action: Redesign the ACM Digital Library and Portal in a shadow website to show how modern use of structured HTML would help.


Action: Consider alternatives to PDF delivery formats. Would articles be more or less usable in DAISY?


Action: Design suites of use cases for alternative digital libraries of computer science content. Which library or search engine is most cost-effective for maintenance and for users?

Science drivers

Understanding of brain plasticity suggests new ways of managing disabilities

Brain science should explain the unexpected effectiveness and pleasure of reading without vision.


My personal story: although I was experimenting with TTS reading of web pages, I had little appreciation, probably induced by denial, of how I could ever read books or long articles in their entirety. Since only a few weeks passed between giving up on my Newsweek and my reading on archetypes and my retina specialist pronouncing me beyond the acuity level of legal blindness, I only briefly flirted with magnifiers, the trade of low vision specialists. Rather, upon the advice of another legally blind professional I met through her book and podcast interviews, I immediately joined the wonderful nonprofit bookshare.org. A few trials with some very good synthetic voices and clunky PC-based software book readers led me to the best handheld device at the time, the Bookport from APH, the American Printing House for the Blind. Within weeks, I was scouring Bookshare, then around 20,000 volumes, for my favorite authors and, wonders be, best sellers to download to my Bookport. At first, I abhorred the synthetic voice, but if that was all that stood between me and regular reading, I could grow to love old precious Paul. Going on 4 years, 2 GB of books, and a spare of the discontinued Bookport later, I still risk strangulation from ear buds at night with the Bookport beside me. Two book clubs broadened my reading into deeper, unfamiliar nonfiction terrain, and the Levelstar Icon became my main retriever from Bookshare, now up to 60,000 volumes with many teenage series and nationally available school textbooks. I tell this story not only to encourage others losing vision, but also as a testimonial to the fact that I am totally and continually amazed and appreciative that my brain morphed so easily from visual reading of printed books to TTS renditions in older robotic-style voices. I really don't believe my brain knows the difference about plot, characters, and details, with the exception of difficult proper names and tables of data (more later). Neuroscientists and educators write books about the evolution of print but rarely delve into these questions of the effectiveness and pleasure of pure reading by TTS. The best research is Clifford Nass's "Wired for Speech", on how our brains react to gender, ethnicity, age, emotion, and other factors of synthetic speech. Such a fascinating topic!

Action: Listen to some of the samples of synthetic speech on my website, e.g. the blockbuster 'Lost Symbol' sample. Which voices affect your understanding of the content? How much do you absorb compared with reading the text sample? Extrapolate to reading the whole book using the voices you prefer, or can tolerate, and consider how you might appreciate the book's plot, characters, and scenery. Do you prefer male or female voices? Why?

Numerical literacy is an open challenge for visual disability.

I personally encountered this problem trying to discuss a retirement report based around asset allocations expressed in pie charts. Now, I understand charts well; I even programmed a chart tool. But I could find no way to replace the fluency of seeing a pie chart by reading the equivalent data in a table. This form of literacy, a form of numeracy, needs more work in the area of trans-literacy, using multiple forms of perception and mental reasoning. Yes, a pie chart can be rendered in tactile form, as on Braille pin devices, but these are still expensive. Sound can convey some properties, but that depends on good hearing and a different part of the brain. Personally, I'd like to experiment with a widget operated by keyboard, primarily arrow keys, that also reads numbers with different pitches, voices, volumes, or other parameters. The escalating sound of a progress bar is available in my screen reader, for example. Is there a composite survey somewhere of alternative senses and brain training to replace reading charts? Could this be available in the mainstream technology market? How many disabilities or deficiencies of education and training might also be addressed in otherwise nondisabled people?
Is there an app for that?
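To make the idea concrete, here is a small Python sketch of the kind of mapping such a widget might use, assigning each slice of a hypothetical asset-allocation table a pitch proportional to its share; the data, the pitch range, and printing rather than playing the tones are all my assumptions:

```python
# Sketch of sonifying a pie-chart-like table: map each value to a pitch so
# larger slices sound higher. Data and pitch range are invented for illustration.
ALLOCATION = {"stocks": 55, "bonds": 30, "cash": 10, "other": 5}  # percent

LOW_HZ, HIGH_HZ = 220.0, 880.0   # two-octave range, A3 to A5

def value_to_pitch(value, lo, hi):
    """Linear map of a value in [lo, hi] onto the chosen frequency range."""
    return LOW_HZ + (HIGH_HZ - LOW_HZ) * (value - lo) / (hi - lo)

values = list(ALLOCATION.values())
lo, hi = min(values), max(values)
for name, pct in ALLOCATION.items():
    pitch = value_to_pitch(pct, lo, hi)
    # A real widget would speak the label and play the tone as the user arrows
    # through the slices; here we just print what it would do.
    print("%s: %d%%  ->  tone at %.0f Hz" % (name, pct, pitch))
```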


Action: Inventory graphical examples where data tables or other structures provide sufficient alternatives to charts. Prototype a keyboard-driven, speech-enabled widget for interaction with chart-like representations of data. Thank you for using me as a test subject.


Action: Moving from charts to general diagrams, how can blind students learn equivalent data structures like lists, graphs, state machines, etc.?

Web science needs accessibility criteria and vice versa.


The web is a vast system of artifacts of varying ages, HTML generations, human and software origins, importance, etc. Could current site and page accessibility evaluation scale to billions of pages in a sweep of accessibility improvement? Surveys currently profile how screen readers are used and the distribution of HTML element usage.


Do a web search, in Bing, Yahoo, Google, Dogpile, whatever, and you'll probably find a satisficing page, and a lot you wish not to visit or never visit again. Multiply that effort by, say, 10 for every page that's poorly designed or inaccessible to appreciate the search experience of the visually impaired. Suppose also that the design flaws that count as accessibility failures also manifest as stumbles or confusion for newer or less experienced searchers. Now consider a failure rate for serious flaws of, say, 90% of all pages. Whew, there are a lot of barriers and waste in them there websites.


Experienced accessibility analysts, like those found on the WebAxe podcast and blog, can sort out good, bad, and merely problematic features. Automated validation tools can point out many outright problems and hint at deeper design troubles.


Let's up the level and assume we could triage the whole web, yep, all billions of pages, matched against experimental results from real evaluators, say visually impaired web heads like me and those accessibility experts. This magic metric, MM, has three levels: OK, no show stoppers, at 80% agreement with human evaluators; DO OVER, again at 80% agreement with human evaluators on awfulness; and the remainder, requiring reconciliation of human and metric. Suppose an independent crawler or search engine robot used this MM to tag sites and pages. Probably nothing would happen. But if…
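Nobody has that MM yet, but a sketch shows how little machinery the triage itself needs once automated signals and sampled human judgments exist. The signal names, weights, and score cutoffs below are invented; only the three output levels and the 80% agreement idea come from the thought experiment above:

```python
# Toy triage for the hypothetical Magic Metric (MM): combine a few automated
# signals into OK / DO OVER / RECONCILE. Signal names, weights, and cutoffs are
# invented; the three levels and 80% agreement idea echo the text above.
def magic_metric(signals):
    """Return a 0.0-1.0 'probably fine' score from automated checks."""
    score = 1.0
    score -= 0.4 * (signals.get("heading_count", 0) == 0)     # no headings at all
    score -= 0.3 * signals.get("images_missing_alt_ratio", 0.0)
    score -= 0.3 * signals.get("keyboard_traps", 0)
    return max(score, 0.0)

def triage(signals, ok_cutoff=0.8, doover_cutoff=0.2):
    score = magic_metric(signals)
    if score >= ok_cutoff:
        return "OK"          # target: ~80% agreement with sampled human evaluators
    if score <= doover_cutoff:
        return "DO OVER"     # again validated against sampled human judgments
    return "RECONCILE"       # humans and metric must be reconciled

print(triage({"heading_count": 0, "images_missing_alt_ratio": 0.5, "keyboard_traps": 1}))
```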

Action: Declare a Clean Up the Web week, where the MM invokes real action to perform "do over" or "reconcile". Now we're paying attention to design factors that really matter and instigating serious design thought. All good; all we need is that MM.

Action: Which profession produces the most accessible pages, services, and sites? Computer scientists seem to be consistently remiss on headings, but are chemists or literary analysts any better? If acm.org is as bad as I claim, are other professional societies more concerned about quality of service to their members? What are they doing the same or differently? How does the quality of accessibility affect the science of design as applied to web pages, sites, and applications?

Accessibility needs a Science of Design and Vice Versa


Accessibility concerns often lead into productive, unexplored design regions. Accessibility and usability are well-defined if underused principles of product quality. The 'curb cuts' principle suggests that a defect with respect to these qualities lies in a poorly understood or unexplored area of a design. Often a problem that presents only a little trouble for the expected "normal" user is a major hassle or show stopper for those with certain physical or cognitive deficiencies. However, those flaws compound and, often invisibly, reduce productivity for all users. Increasingly, these deficiencies arise from ambient environmental conditions such as glare, noise, and potential damage to users or devices.


Moreover, these problems may also indicate major flaws related to the integrity of a design and the long-term maintainability of the product. An example is the omission of headings on an HTML page, which makes it difficult to find content and navigation divisions with a screen reader. This flaw usually reveals an underlying lack of clarity about the purpose and structure of the website and page. Complexity and difficult usability often arise from missing and muddled use cases. Attitudes opposing checklist standards often lead to perpetuating poor practices such as the silly link label "click here".


The 'curb cuts' principle leads toward a theory of design that requires remedying accessibility problems not as a kindness to users, nor to meet a governmental regulation, but rather to force exploration through difficult or novel parts of the design terrain. The paradigm of "universal design" demands attention to principles that should influence requirements, the choice of technical frameworks, and attention to different aesthetics and other qualities. For example, design principles may address where responsibility lies for speaking information to a user, thus questioning whether alternative architectures should be considered. Applying this principle early and thoroughly potentially removes many warts of the product that would otherwise require clumsy and expensive accessibility grafts or do-overs.


Just as the design patterns movement grew from the architectural interests of Christopher Alexander, attention to universal design should help mature the fields for software and hardware. The “curb cuts” principle motivates designers to think beyond the trim looking curb to consider the functionality to really serve and attract ever more populations of end users.


The accessibility call for action, accommodation, translates into a different search space and broader criteria, plus a more ethically or economically focused trade-off analysis. Now, design is rarely explicitly exploration-, criteria-, or tradeoff-focused, but the qualitative questions of inclusive design often jolt designers into broader consideration of design alternatives. Web standards such as WCAG 2.0 provide ways to prune alternatives as well as to generate generally accepted good alternatives. It's that simple: stay within the rules, stray only if you understand the rationales for those rules, and temper trade-off analysis with empathy toward excluded users or hard, cool acceptance of lost buyers or admirers. Well, that's not really so simple, but it expresses why web standards groups are so important and helpful: pruning, generating, and rationalizing is their contribution to web designers' professional effectiveness and peace of mind.


Action: Reconstruct a textbook design to identify assumptions about similarities and differences of users. Force the design to explore extremes such as missing or defective mouse and evaluate the robustness of the design.


Action: Find an example of a product that illustrates universal design. How were its design alternatives derived and evaluated?

Revving up our computational thinking on accessibility

POSH (Plain Old Semantic HTML) and headings

POSH focuses our attention on common structural elements of HTML that add meaning to our content, with headings and lists as regular features. An enormous number of web pages are free of headings or careless about their use. The general rule is to outline the page in a logical manner: H1, H2, H3, …, H6, in hierarchical ordering.
Why is this so important for accessibility?

  1. Headings support page abstraction. Reaching a page, whether on a first or return visit, I, and many other screen reader users, take a 'heading tour'. Pressing the 'h' key repeatedly to visit headings gives a rapid-fire reading of the parts of the page and an introduction to the terminology of the website and page content (see the sketch after this list). Bingo! A good heading tour and my brain has a mental map and a quick plan for achieving my purpose for being there. No headings and, argh, I have to learn the same thing through links and weaker structures like lists. At worst, I need to tab along the focus trail of HTML elements, usually a top-to-bottom, left-to-right ordering.

  2. Page abstraction enables better than linear search if I know roughly what I
    want. For example, looking for colloquium talks on a CS website is likely to
    succeed by heading toward News and Events or the like. With likely a few dozen
    page parts, linear search is time and energy consuming, although it often leads
    to interesting distractions.

  3. Page abstraction encourages thinking about cohesion of parts, where to
    modularize, how to describe parts, and consistent naming. This becomes
    especially important for page maintainers, and eventually page readers, when
    new links are added. Just like software design, cohesion and coupling plus
    naming help control maintenance. An example of where this goes wrong is the
    “bureaucratic guano” on many government web pages, where every administrator
    and program manager needs to leave their own links but nobody has the page
    structure as their main goal.

  4. While it’s not easy to prove, SEO specialists (search engine optimizers)
    plausibly claim headings play a role in page rankings. This appeals to good sense:
    words used in headings are more important, so they are worth higher weights for search
    accuracy. It might also mean such pages are better designed, but this is just the
    conventional wisdom of users with accessibility needs.

So, we have abstraction, search, design quality, and metrics applied to the
simple old semantic HTML heading construct.
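
To make the heading tour concrete, here is a minimal sketch of a POSH outline; the department name and section titles are invented for illustration, but the h1 through h3 nesting is exactly what a screen reader user skims with the ‘h’ key:

    <!-- Hypothetical CS department page: one h1, then h2/h3 sections in document order -->
    <h1>Anytown University Computer Science</h1>
    <h2>About the Department</h2>
    <h2>News and Events</h2>
      <h3>Colloquium Series</h3>
      <h3>Student Awards</h3>
    <h2>Undergraduate Programs</h2>
    <h2>Contact and Directions</h2>

A heading tour of that outline reads off the section names in a few seconds, which is the mental map described above.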


Now, this rudimentary semantic use of headings is the current best practice, supplementing the deprecated accesskey shortcuts that all keyboard users can exploit to reach standard page locations, like the search box and navigation. Rather, headings refine and define better supplements for access keys. Going further, the ARIA brand of HTML encourages so-called ‘landmarks’ which can also be toured and which help structure complex page patterns such as search results. The NVDA screen reader reports landmarks, as illustrated on AccessibleTwitter and Bookshare. Sites without even headings appear quaint and deliberately unhelpful.
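
For readers who have not yet met landmarks, the sketch below shows the kind of role attributes a page might carry; the region contents are placeholders rather than any particular site’s markup. Screen readers such as NVDA announce these regions and let the user jump among them much like headings:

    <div role="banner">Site name and logo</div>
    <div role="navigation" aria-label="Main menu">Links to the major sections</div>
    <div role="search">
      <form action="/search">
        <label for="q">Search</label>
        <input type="text" id="q" name="q">
      </form>
    </div>
    <div role="main">
      <h1>Search results</h1>
      Result listings go here
    </div>
    <div role="contentinfo">Copyright and policy links</div>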

The Readable Conference Program Problem

I recently attended a conference of 3.5 days with about 7 tracks per session.
The program came as a PDF without markup, apparently derived from a WORD
document intended for use in printed form. Oh, yeah, it was a 10 MB download with
decorations and all the conference info.


I was helpless to read this myself. Yes, I could use the screen reader but
could not mentally keep in mind all the times and tracks and speakers and
topics. I couldn’t read down tracks or across sessions, nor mark talks to
attend. Bummer, I needed a sighted reader and then still had to keep the
program in mind while attending.


An HTML version of the preliminary program was decidedly more usable. Hey, this is what hypertext is all about! Links from talks to tracks and sessions and vice versa, programs by days or half-days subdivided onto pages, real HTML data tables with headers that can be interpreted by a screen reader, albeit still slowly and painfully.
That’s better, but it would be unpopular with sighted people who
wanted a stapled or folded printout.
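
As a sketch of what such a program table can look like, with invented session times and talk titles, the header cells below are what let a screen reader announce the time and track for whichever cell has focus:

    <table>
      <caption>Tuesday morning sessions (times and titles invented for illustration)</caption>
      <tr>
        <th scope="col">Time</th>
        <th scope="col">Track A: Accessibility</th>
        <th scope="col">Track B: Education</th>
      </tr>
      <tr>
        <th scope="row">9:00</th>
        <td><a href="#talk-a1">Screen Readers in the Classroom</a></td>
        <td><a href="#talk-b1">Teaching POSH First</a></td>
      </tr>
      <tr>
        <th scope="row">10:00</th>
        <td><a href="#talk-a2">ARIA Landmarks in Practice</a></td>
        <td><a href="#talk-b2">Universal Design Projects</a></td>
      </tr>
    </table>

Reading down a column gives a track, reading across a row gives a time slot, the very navigation the unmarked PDF denied me.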


OK, we know this is highly structured data, so how about a database? This would
permit, with some SQL and HTML wrapping, generation of multiple formats, e.g.
emphasizing tracks or sessions or topics,… But this wouldn’t likely distill
into a suitable printable document. Actually, MS WORD is programmable, so the
original route is still possible but not often considered. Of course, it’s often more difficult to enter data into forms for a database, but isn’t that what student helpers are for? Ditto the HTML generation from the database.


The best compromise might be using appropriate heading styles in WORD and
an available DAISY export so the program, in XML, could be navigated in our
book readers.


This example points to the persistent problem that PDF, which prints well and
downloads intact, is a bugger when it loses its logical structure. Sighted
readers see that structure; print-disabled people get just loads of text. This
is especially ironic when the parts originally had semantic markup that was lost in
translation to PDF, as occurs with NSF proposals.


So, here I’m trying to point out a number of abstraction problems, very
mundane, but amenable to an accommodation by abstracting to a database type of
model or fully exploiting markup and accessible formats in WORD. Are there other
approaches? Does characterizing this problem in terms of trade-offs among abstractions and loss of structural information motivate computer scientists to approach their conference responsibilities differently?


More generally, accessibility strongly suggests that HTML be the dominant document type on the web, with PDF, TXT, WORD, etc. as supplementary. Adobe and freelance consultants work very hard to explain how PDF may be made accessible, but that’s just not happening, nor will it fix the probably millions of moldering PDFs already out there. Besides negligent accessibility, forcing a user out of a browser into a separate application consumes extra resources and brings inevitable security updates.

Design by Progressive Enhancement


‘Graceful degradation’ didn’t work for web design, e.g. when a browser has JavaScript turned off, or an older browser is used, or a browser uses a small screen. Web designers recast their process to focus on content first, then styles, and finally interactive scripting. There’s a lot more in the practitioner literature that might well be amenable to computational thinking, e.g. tools that support and ease the enhancement process as well as the reverse accommodation of browser limitations. Perhaps tests could be generated to work in conjunction with the free screen reader, to encourage web developers to place themselves in the user context, especially one requiring accessibility.
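
Here is a minimal sketch of the layered idea, assuming a hypothetical program page and summary box: the content works as a plain link everywhere, the stylesheet is purely cosmetic, and the script only adds convenience when it happens to run.

    <!doctype html>
    <html lang="en">
    <head>
      <meta charset="utf-8">
      <title>Progressive enhancement sketch</title>
      <!-- Presentation layer: cosmetic only, safe to ignore -->
      <link rel="stylesheet" href="site.css">
    </head>
    <body>
      <!-- Content layer: a plain link that works in every browser, scripted or not -->
      <p><a href="program.html">Conference program (full page)</a></p>
      <div id="program-summary" hidden>
        <h2>Today’s sessions</h2>
        <p>A short in-page summary that scripting reveals.</p>
      </div>
      <!-- Behavior layer: reveal the summary only when scripting is available;
           script-off users and older browsers still have the plain link above -->
      <script>
        var summary = document.getElementById('program-summary');
        if (summary) { summary.hidden = false; }
      </script>
    </body>
    </html>

Strip away the script and the stylesheet and the page still does its job, which is the whole point.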


So, here’s a challenge for those interested in Science of Design, design patterns, and test methods with many case studies on the web, discussed in blogs and podcasts.

Touring CS websites by screen reader
— download MP3


Are you up for something different? Download the MP3 illustration of POSH Computer Science websites (45 minutes, 20 MB). This is me talking about what I find at the following locations, pointing out good and bad accessibility features. You should get a feeling for life using a screen reader and how I stumble around websites. And, please, let me interject that we’re all learning to make websites better, including my own past and present.

Note: I meant POSH=”Plain old semantic HTML” but sometimes said “Plain old simple HTML”. Sorry about the ringing alarm. Experimental metadata: Windows XP, Firefox, NVDA RC 2009, ATT Mike and Neo speech Kate, PlexTalk Pocket recorder.

Web Sites Visited on CSE screen reader tour


  1. U. Texas Austin


    Comments:
    Firm accessibility statement;
    graphic description?;
    headings cover all links?;
    good to have RSS;
    pretty POSH


  2. U. Washington


    Comments:
    No headings, uses layout tables (deprecated);
    good use of ALT describing graphics;
    not POSH


  3. U. Arizona


    Comments:
    all headings at H1, huh?;
    non informative links ‘learn more’;
    not POSH


  4. CS at cmu.edu


    Comments:
    no headings;
    non informative graphics and links;
    unidentified calendar trap;
    definitely not POSH


  5. Computational Thinking Center at CMU


    Comments:
    no headings;
    strange term probes;
    non informative links PPT, PDF;
    poor POSH


  6. CRA Computing Research Association


    Comments:

    no headings;
    interminable links unstructured list;
    not so POSH


  7. ACM.org and DL portal


    Comments:
    irregular headings on main page;
    no headings on DL portal;
    noninformative links to volumes;
    hard to find category section;
    poor POSH


  8. Computer Educators Oral History Project CHEOP


    Comments:
    straightforward headings;
    don’t need “looks good” if standard;
    good links;
    POSH enough


  9. NCWIT National Center Women Information Technology


    Comments:
    doesn’t conform to accessibility statement;
    graphics ALT are not informative;
    link ‘more’ lacks context;
    headings irregular;
    do over for POSH

So, what to do with these POSH reports?


Clearly, some sites could use more work to become world-class role models for accessibility. At first glance, my reports and those that would be compiled from validators like WebAIM WAVE indicate that some HTML tweaking would yield improvements. Maybe, but most websites are under the control of IT or new media or other departments, or maybe outsourced to vendors. Changes would then require negotiation. Another complication is that once a renovation starts, it is all too easy to use the change for a much more extensive overhaul. Sometimes, fixes might not be so easy, as is often indicated by the processes of progressive enhancement. This is classical maintenance process management, as in software engineering.


However, hey, why not use this as a design contest? Which student group can produce a mockup shadow website that is attractive and also meets the WCAG, validator, and even the SLGer tests?


Just saying, here’s a great challenge for CSE to (1) learn more about accessibility and web standards, (2) make websites role models for other institutions, and (3) improve service for prospective students, parents, and benefactors.

Conclusion: A Call to Action

To the proclamation, let us informally add

  • whereas society, including the CS field itself, requires that all information and computer-based technology be available to all persons with disabilities,

  • whereas computer science is the closest academic field to the needs and opportunities for universal accessibility,


  • whereas disabled individuals are particularly under-represented in computing fields, in disparate proportion to the importance of disability in the economic and social well-being of the nation,

  • therefore
  • computer science educators will adapt their curricula to produce students with professional awareness of the range of human abilities and the resources for responding to the needs of persons with disabilities,

  • computer science education will be open and welcoming to all persons with disabilities, both (1) helping each person to reach their own employment potential and opportunity to contribute to society and (2) informing educators and other students about their abilities, needs, and domain knowledge.

See next post for Webliography

Comments, Corrections, Complaint?

Please add your comments below and I’ll moderate asap.
Yes, I know there are lots of typos but I’m tired of listening to myself, will proof-listen again later.
Longer comments to slger123@gamail.com. Join in the Twitter discussion of #accessibility by following me as slger123.


Thanks for listening.

Webliography for ‘Grafting Accessibility onto Computer Science Education’

December 7, 2009

References for ‘Grafting Accessibility Onto Computer Science’ Education

This webliography accompanies the ‘As Your World Changes’ post ‘Grafting Accessibility onto Computer Science Education’, Dec 7, 2009. That article analyzes trends in society, technology, and science and suggests actions for exercises, projects, and debates suitable for traditional computer science courses. See also a recording of how CS web sites appear to a visually impaired person using a screen reader.
The article’s theme is the application of computational thinking to accessibility problems and techniques.

Computational Thinking


  1. Computational Thinking and Thinking About Computing, Jeannette Wing, Royal Society


  2. Jon Udell Podcast Interview with Dr. Jeannette Wing on Computational Thinking


  3. Jon Udell Interview Podcast with Joan Peckham on NSF Computational Thinking activities


  4. Center for Computational Thinking Carnegie Mellon University

Accessibility Resources


  1. IEEE ‘Accessing the Future’ 09 Conference

    Recommendation 1: In standards and universal design it is imperative that accessibility and the needs of people with disabilities are incorporated into the education of those who will generate future ICT.

  2. Assistive Tech and organization conferences and exhibits, e.g. CSUN Cal State Northridge accessibility conference (San Diego)

  3. User Centered Design Blog post on future of accessibility


  4. Project Possibility Open Source for Accessibility


  5. Knowbility Consulting, John Slatin Access U


  6. Business Week series on assistive technology


  7. Understanding Progressive Enhancement


  8. National Center on Technology Innovation brief on Assistive Technology

    Portability, customization, etc.


  9. Five Key Trends in Assistive Technology, NCIT summarized


  10. Webaim.org with guidelines, validator, NVDA testing, screen reader survey


  11. Opera’s MOMA Discovers What’s Under the Web Hood


  12. Jakob Nielsen Alertbox and Beyond ALT Report


  13. Podcast series on practical accessibility, see #74 ‘Back to Basics’


  14. Video on importance of HTML headings


  15. gov 2.0: Transparency without Accessibility? (FCW)


  16. Clifford Nass ‘Wired for Speech’ book and experiments

Web Standards and Accessibility References


  1. STC Society of Technical Communicators Accessibility SIG


  2. WAI Web Accessibility Initiative of W3C


  3. WCAG 2.0 Web Content Accessibility Guidelines


  4. #Accessibility or #a11y tracks tweets using AccessibleTwitter


  5. The Web standards Mafia honored Nov. 30 Web standards day



  6. Interact open web standards curriculum project


  7. Opera’s Web standards Curriculum


  8. Online book on Integrating Accessibility in design ‘Just Ask’


  9. How People with Disabilities use the Web

Computer Science Week and Policy Organization References



  1. Computer Science Education Week


  2. Accessibility official statements of SIGCSE


  3. US ACM Policy on Web Accessibility

    with many useful links


  4. Dept. of Justice Office of Civil Rights on Web Accessibility in Higher Education


  5. Computing Research News on Accessibility Research (Ladner)


  6. ACM Special Interest group on Computing accessibility

Computer Science Education and Accessibility References

  1. ‘Accessibility First’ Approach to Teaching Web Design, Hamilton College


  2. Web Design with Universal Usability (Shneiderman)


  3. Academia.edu people with speciality accessibility


  4. Web Education Survey


  5. Diversity Through Accessibility blog


  6. Improving Web Accessibility through Service Learning Partnerships


  7. Integrating Usability and Accessibility in Information Systems Assurance


  8. Equal Access, Universal Design of Computing Departments


  9. AccessMonkey project at U. Washington


  10. An Accessibility Report Card for World Known Universities


  11. Introducing Accessibility in Internet Computing


  12. WebAnywhere reader from U. Washington


  13. Broadening Participation NSF


  14. Visually Impaired Students Get a Boost in Computing (RIT)


  15. Imagine IT Project at Rochester Institute of Technology
Service Organizations within Academia References

  1. WebAIM on University Accessibility Policies


  2. Web Accessibility Center at The Ohio State University


  3. Designing More Accessible Websites — TRACE Center U. Wisconsin


  4. Best HTML Practices from ICTA Illinois Center for Web Accessibility


  5. Cultivating and Maintaining Accessibility Expertise in Higher Education


  6. Access IT National Center at U. Washington


  7. A Checklist for Making Computing Departments Inclusive, DOIT at U. Washington


  8. Distance Learning Accessibility Evaluation


  9. U. Texas Accessibility Center (RIP)


  10. Disability 411 Podcast for Disability Professionals

Services and Products for Visually Impaired


  1. Bookshare.org

    60,000+ digital talking books scanned by volunteers or contributed by publishers, available to all USA Special Ed students


  2. TextAloud reader and mp3 converter

    also source for commercial synthetic voices and a good newsletter on text to speech


  3. Free, open source, international screen reader NVDA (non-visual desktop access)


  4. Audio-driven PDA, RSS, newspaper and book reader
    from Levelstar.com

  5. Disability.gov
  6. American Foundation for the Blind, Access World newsletter and product reviews

  7. American Council of the Blind

  8. National Federation of the Blind
  9. Access World product reviews


  10. DAISY international consortium on the digital talking books standard


Podcasts on Assistive Tech and Persons with Disabilities


  1. Blind Cool Tech amateur product reviews

  2. Accessible World Tech Training

  3. ACB Radio news, demo, interviews


  4. WebAxe Podcast on Practical Accessibility

Notes and References on the ‘Curb Cuts’ principle

  1. ‘Universal Design’ paradigm (from Wikipedia) integrates concepts from physical, architectural, and information design.

  2. Detailed principles (from NCSU design center) include equitable use, flexibility, simplicity, intuitiveness, tolerance for error, low physical effort,…

  3. A chronology of inventions for electronic curb cuts illustrates how hearing, seeing, and learning disabilities have influenced the modern communications world.

  4. The ‘curb cut’ symbolism is widely used in the accessibility world, e.g. ‘curbcuts.net’, an accessibility consultancy. The site kindly provides a guide to concrete curb cuts.


  5. Background on accessibility in the context of “curb cuts”
    covers the essential role of considering the full range of human abilities in design.

  6. Analysis of the “curb cut” metaphor in computing suggests many problems in its usage.

Relevant ‘As Your World Changes’ Posts


  1. AYWC ‘Using Things That Talk’ demonstration presentation


  2. AYWC Literacy Lost and Found (charts, reading)

  3. AYWC Amazon Kindle and accessibility: what a mess!


  4. AYWC stumbling around .gov websites: the good, bad, and goofy


  5. AYWC Are missing, muddled use cases the cause of inaccessibility?


  6. AYWC Images and their surrogates — the ALT tag


  7. AYWC Let’s all use our headings

Comments, Corrections, Complaint?

Please add your comments below and I’ll moderate asap.
Yes, I know there are lots of typos but I’m tired of listening to myself, will proof-listen again later.
Longer comments to slger123@gamail.com. Join in the Twitter discussion of #accessibility by following me as slger123.


Thanks for listening.

The Pleasures of Audio Reading

May 22, 2009

This post expands my response to an interesting Reading in the Dark Survey.
Sighted readers will learn from the survey how established services provide reading materials to be used with assistive technology. Vision Losers may find new tools and encouragement to maintain and expand their reading lives.

Survey Requesting feedback: thoughts on audio formats and personal reading styles?

Kestrell says:

… hoping to write an article on audio books and multiple literacies but, as far as I can find, there are no available sources discussing the topic of audio formats and literacy, let alone how such literacy may reflect a wide spectrum of reading preferences and personal styles.

Thus, I am hoping some of my friends who read audio format books will be willing to leave some comments here about their own reading of audio format books/podcasts. Feel free to post this in other places.

Some general questions:
Do you read audio format books?
Do you prefer special libraries or do you read more free or commercially-available audiobooks and podcasts?
What is your favorite device or devices for reading?
Do elements such as DRM and other security measures which dictate what device you can read on influence your choices?
Do you agree with David Rose–one of the few people who has written academic writings about audio formats and reading–that reading through listening is slower than reading visually?
How many audiobooks do you read in a week (this can include podcasts, etc.)?
Do you ever get the feeling from others that audiobooks and audio formats are still considered to be not quote real unquote books, or that reading audiobooks requires less literacy skills (in other words, do you feel there is a cultural prejudice toward reading audiobooks)?
anything else you want to say about reading through listening?

This Vision Loser’s Response

Audio formats and services


I read almost exclusively using TTS on mobile readers from DAISY format books and newspapers. I find synthetic speech more flexible and faster than narrated content. For me, human narrators are more distracting than listening “through” the voice into the author’s words. I also liberally bookmark points I can re-read by sentence, paragraph, or page.


Bookshare is my primary source of books and newspapers downloaded onto the Levelstar Icon PDA. I usually transfer books to the APH BookPort and PlexTalk Pocket for reading in bed and on the go, respectively. My news streams are expanded with dozens of RSS feeds of blogs, articles, and podcasts from news, magazines, organizations, and individuals. Recently, twitter supplies a steady stream of links to worthy and interesting articles, followed on either the Icon or browser in Accessible Twitter.

I never seem to follow through with NLS or Audible or other services with DRM and setups. I find the Bookshare DRM just right and respect it fully but could not imagine paying for an electronic book I could not pass on to others. I’m about to try Overdrive at my local library. I’ve been lax about signing up for NLS now that Icon provides download. No excuses, I should diversify my services.


I try to repay authors of shared scanned books with referrals to book clubs and friends, e.g. I’ve got several now hooked on Winspear’s “Maisie Dobbs” series.

Reading quality and quantity

I belong to two book clubs that meet monthly as well as taking lifelong learning classes at the community college. Book club members know that my ready book supply is limited and take this into consideration when selecting books. My compact with myself is that I buy selected books not on Bookshare and scan and submit them. I hope to catch up on submitting already scanned books soon. Conversely, I can often preview a book before selection and make recommendations on topics that interest book club members, e.g. Jill B. Taylor’s “Stroke of Insight”. I often annoy an avid reader friend by finishing a book while she is #40 on the local library waiting list. This happens with NYTimes best sellers and Diane Rehm show reader reviews. No, I don’t get askance looks from other readers, but rather the normal responses to an aging female geek.


At any one time, I usually have a dozen books “open” on the Bookport and PlexTalk as I switch among club and course selections, fiction favorites, and heavy nonfiction. However, I usually finish 2 or 3 books a week, reading at night, with another 120 RSS feeds bringing in dozens of articles daily. I believe my reading productivity is higher than before vision loss due to expedient technology delivery of content and my natural habits of skimming and reading nonlinearly. Indeed, reading by listening forces focus and concentration in a good sense and, even better, can be performed in just about any physical setting, posture, or other ambient conditions.
Overall, I am exquisitely satisfied with my reading by listening mode. I have more content, better affordable devices, and breadth of stimulating interests to forge a suitable reading life.

Reading wishes and wants


I do have several frustrations. (1) Books with tables of data lose me as a jumble of numbers unless the text describes the data profile. (2) While I have great access through Bookshare and NFB NewsLine to national newspapers and magazines, my state and local papers use content management systems difficult to read either online or by RSS feed. (3) Google Book Search refuses to equalize my research with others by displaying only images of pages.


For demographics, I’m 66 years old, lost last sliver of reading vision three years ago from myopic degeneration, and was only struggling a few months before settling into Bookshare. As a technologist first exposed to DECTalk in the 1980s, I appreciate TTS as a fantastically under-rated technology. However, others of my generation often respond with what I’ve dubbed “Synthetic voice shock” that scares them away from my reading devices and sources. I’d like to see more gentle introductions from AT vendors and the few rehab services available to retired vision losers. Finally, it would be great to totally obliterate the line between assistive and mainstream technology to expand the market and also enable sighted people to read as well as some of us.

References and Notes on Audio Reading

  1. Relevant previous posts from ‘As Your World Changes’

  2. Audio reading technology
    • LevelStar Icon Mobile Manager and Docking Station is my day-long companion for mail, RSS, twitter, and news. The link to Bookshare Newsstand and book collection sold me on the device. Bookshare can be searched by title, author, or recent additions, and I even hit my 100 limit last month. Newspapers download rapidly and are easy to read — get them before the industry collapses. The book shelf manager and reader are adequate but I prefer to upload in batches to the PC then download to Bookport. The Icon is my main RSS client for over 100 feeds of news, blogs, and podcasts.
    • Sadly, the American Printing House for the Blind is no longer able to maintain or distribute the Bookport due to manufacturing problems. However, some units are still around at blindness used equipment sites. The voice is snappy and it’s easy to browse through pages and leave simple bookmarks. Here is where I have probably dozens of DAISY files started, like a huge pile of books opened and waiting for my return. My biggest problem with this little black box is that my pet dog snags the ear buds as his toy. No other reader comes close to the comfort and joy of the Bookport, which awaits a successor at APH.
    • Demo of PlexTalk Pocket provides a TTS reader in a very small and comfortable package. However, this new product breaks on some books and is awkward managing files. The recording capabilities are awesome, providing great recording directly from a computer and voice memos. With a large SD card, this is also a good accessible MP3 player for podcasts.
  3. Article supporting Writers’ Guild in Kindle dispute illustrates the issues of copyright and author compensation. I personally would favor a micro payment system rather than my personal referral activism. However, in a society where a visually impaired person can be denied health insurance, where 70% unemployment is common, where web site accessibility is routinely ignored, it’s wonderful that readers have opportunities for both pleasure and keeping up with fellow book worshipers.
  4. Setting up podcast, blog, and news feeds is sometimes tricky and tedious. Here is my OPML feed file for importing into other RSS readers or editing in a NotePad.

  5. Here’s another technology question. Could the DAISY standard format, well supported in our assistive reading devices, become a format suitable for distributing the promised data from recovery.gov?
    Here is an interview with DAISY founder George Kerscher on XML progress.

  6. Another physiological question is what’s going on in my brain as I switch primarily to audio mode? Are there exercises that can make that switch over more comfortable and accelerated than just picking up devices and training oneself? I’m delving into Blogs on ‘brain plasticity’
  7. (WARNING PDF) Listening to the Literacy Events of a Blind Reader – an essay by Mark Willis asks whether audio reading can cope with the critical thinking required in a complex and sometimes self-contradictory doctrine like Thomas Kuhn’s “Scientific Revolutions”. This would be a great experiment for psychology or self. Let’s also not forget the resources of Book Club Reading Lists to help determine what we missed in a reading or may have gained through audio mental processing.

Audio reading of this blog post

The ‘Talking ATM’ Is My Invisible Dream Machine.

April 30, 2009

A twitter message alerted me to a milestone I surely didn’t care about a decade ago, but really appreciate now. This post explains how easy it is to use a Talking ATM. People with vision impairment might want to try out this hard-won disability service if not already users. Sighted people can gain insight and direct experience with the convenience of talking interfaces. But, hey, why shouldn’t every device talk like this?

The Milestone: 10 years of the Talking ATM

The history is well told in commemorative articles published in 2003. References below.
Pressure from blind individuals and advocacy organizations circa 2000, with the help of structured negotiators (lawyers), led banks to design and roll out Automated Teller Machines equipped with speech. Recorded audio wav files were replaced by synthetic voices that read instructions and lead the customer through a menu of transactions.

First, I’ll relate my experience and then extrapolate to broader technology and social issues.

My Talking ATM Story


As my vision slid away in 2006, I could no longer translate the wobbly lines and button labels on my ATM screen to comfortably perform routine cash withdrawals. Indeed, on one fateful Sunday afternoon I inserted my card, then noticed an unfamiliar pattern on the screen. Calling in my teenage driver, we noticed several handwritten notes indicating lost cards in the past hour. I had just enough cash in hand to make it through a Monday trip out of town, and immediately called the bank upon return Tuesday. A series of frustrating interactions ensued, like my ATM card could only be replaced by my coming in to enter a new PIN. But how was I to get to the office without a driver or cab fare when I was out of cash?


This seemed like a good time to familiarize myself with audio ATM functions, to lessen the risk of having another card gobbled by a temporarily malfunctioning station. With lingering bad feelings about the branch of the Sunday fiasco, I recalled a better experience at a different office after my six-month saga on reversal of a mortgage over-payment. Lesson learned—never put an extra 0 in a $ box and always listen or look carefully at verification totals.


I strolled into the quiet office and asked customer service to explain the audio teller operations. The pleasant service person whipped out a big headset and we headed out to the ATM station. Oddly, most stations are located in office alcoves or external walls. This one was outside the drive-by window to be shared by pedestrian and automotive customers.
OK, waiting for traffic to clear, we went through a good intro. I wasn’t as familiar with audio interfaces at that point in my Vision Loser life, but I eventually worked up the courage in the next few weeks to tackle the ATM myself with my own ear buds.


Well, 3 years later, I’m a pro and can get my fast cash in under a minute, unless my ear buds get tangled or I drop my cane. First problem is figuring out how to get in line, like standing behind a truck’s exhaust or walking out before a monster SUV. Usually I hang back, looking into the often dry bed of Granite Creek until the line is empty. Next step is to stand my white cane in a corner of the ATM column, feel around for the audio opening hidden in a ridged region, wait for the voice to indicate the station is live, shove in my card, and I’m ready to roll. The voice, probably Eloquence, usually drones into a “Please listen carefully as the instructions have changed…”. Shut up, this will only take a minute and I don’t need to change volume or speed. Enter, type the PIN, retype the PIN if, as commonly happens, I hit a wrong key, and on to Main Menu (thinking of ACB Radio’s Technology jingle). The 6 button down to Fast Cash, on by 20,…100,…, confirm and click, chug comes cash, receipt, and release of card. Gather up receipt, card, cane, and — important — remove ear buds, and I’m on my way.


Occasionally things go wrong. Recently, my receipt didn’t appear and the customer service rep and I did a balance request and out spat two receipts, both mine. Kind of nerve-wracking, as somebody else could have intervened and learned of my great wealth. The customer service rep vowed to call in maintenance on the ATM, but I bet a few more receipts got wadded up that afternoon. Electro-mechanical failures often foil sophisticated software.


Another time, I finished my Fast Cash and waited for card release only to be given a “have we got a good deal for you” long-winded offer of a credit card. I wasn’t sure how to cancel out and still get my ATM card back. Since I lecture family on the evils of the credit card, I was fuming at a double punishment. Complaining to the customer service rep inside, I learned sighted people were also not thrilled at this extra imposed step.


Now, to reveal the identity of the ATM, it’s Chase Bank, formerly Bank One, on Gurley Street near the historic Whisky Row of downtown Prescott AZ.
Although I haven’t performed any complex ATM interactions, it’s fair to say I’m a satisfied user and would not hesitate to recommend this to anyone with good hearing unafraid to perform transactions with engines and radios and cell conversations roaring all around. An indoor ATM would be a good step someday but, hey, this is a conservative town, not particularly pedestrian friendly. Mainly I appreciate that I can get my cash as part of a routine just like other people and I don’t even use up extra gasoline waiting in line.

Broader Issues of Talking Transactions

Does the ATM voice induce Synthetic Voice Shock?

I coined the term in Synthetic Voice Shock Reverberates Across the Divides to explain responses I heard about voices offered in assistive technologies to overcome vision loss. Personally, I hated Eloquence when I first heard it demonstrated but I rapidly grew to love my Precise Paul and friends as I realized that (1) the voices really were understandable and (2) I didn’t have any choice if I wanted to keep reading. I now wonder how people like me, slowly losing vision while off the rehab grid, learn about Talking ATM and related services. It hurts to think people give up that one step of independence from not knowing whom to ask or even if such services exist. And supposing someone does step up to an ATM ready to listen, are they tuned in to hearing synthetic speech sufficiently to make an informed choice whether the Talking Teller is an appropriate service for them? Did the Disability Rights movement fight through a decade only to have a generation of drop-outs from oldsters with difficulty adjusting to vision loss, a panoply of technology, and no-longer-young nerves?

Are Audio E-voting and Talking ATM’s Close Cousins?

I have described my experiences in 2008 voting without viewing. The voting device is a keypad like the one offered by the ATM I use, while the voice is a combination of human-narrated candidate and race announcements interspersed with synthetic speech instructions and navigation. I found this mode of voting satisfying, compared with having someone read the ballot to me and mark it for me. However, even my well-attuned ears and fingers seemed to get in trouble with speech speedup and slowdown, which I blame on poor interaction design. Note that many ATM and voting systems have origins in the NCR and Diebold product lines, so usability and accessibility research lessons should carry over.

Why aren’t all check-out services as easy as banking?


I buy something at a store and then have a hassle at check-out finding a box on a screen or buttons I cannot see for typing in a debit card PIN. I’ve never understood why I can give a credit card number over a phone without signing but must sign if I swipe it at checkout. And giving a PIN to a family member or stranger isn’t good practice. Sometimes check-out can get really nasty, as when a checker wouldn’t let me through because my debit card swiper was only age 20 – it’s my debit card, my groceries, my wine, and I’ll show you a social security age ID card. Geez, now we’re nervous every time we check out at Safeway since Aunt Susan has a short fuse after a tiring shopping session. If only the point-of-sale thing talked and had tactile forms of PIN entry. I ask Safeway when accessible check-out will be possible and let them know the store has a visually impaired regular shopper.

Is audio interaction a literacy issue?


We are actually on track to a world where everything talks: microwave ovens, cards, color tellers, security systems, thermostats, etc. Text to speech is a commodity additional feature of onboard processors in digital devices. Indeed, we can hope this feature slips out of the aura of assistive technology into the mainstream to enlarge the range of products and capabilities available to everybody. Why shouldn’t manuals be built into the device, especially since the device is soon after purchase separated forever from its printed material? Why shouldn’t diagnostics be integrated with speech rather than provided on bitty screens hard to read for everybody? How about making screens the add-on features with audio as the main output channel?


Let’s generalize here and suggest the need for a simple training module to help people with recent vision loss get accustomed to working keypads accompanied by synthetic speech. Who could offer such training? I asked around at the CSUN exhibits and haven’t yet found an answer. There are multiple stages here, like producing a book and then distributing to end users via libraries or rehab services. My experience is that social services are hard enough to find and often more available to people who have already suspended independent activities.


The outreach problem is real. Finally, I’d like to express my appreciation to the activists, educators, and lawyers who convinced banking organizations, and continue to work on retailers, to make my “money moments” conventional and unstressful. The “talking ATM” shows what is possible not only for business but also for the broader opportunities sketched out above. Let all devices talk, I wish.

References on Talking ATMs

  1. Background and excellent overview compiled by Disability Civil Rights Attorney Lainey Feingold

  2. Blind Cool Tech demos of talking devices

  3. Talking ATM on wikipedia

  4. Swedish choice of Acapela voices for ATMs for more modern sounding speech. Demos available on website.


  5. Chase bank and Access Technologies ATM collaboration


  6. (PDF) 2003 case study of Talking ATM upgrades
    . Bundled features with speech included better encryption and streamlined statement viewing.


  7. The electronic ‘curb cuts’ effect
    by Steve Jacobs


  8. Portfolio of talking information
    based on ATT technology

  9. ‘What to do when you meet a sighted person’ (parody)

Accessible Voting Worked for Me, I Think

October 31, 2008

It was a fine warm fall day for voting with an overhang of smoke from controlled burns in nearby forests.

After an earlier trial demo and a mixed experience in the September primary, I felt geared up for the mechanics of voting independently in this penultimate election of my lifetime. Ending a year of political junkiness and some serious conversations with “Jack the Dog Walker” on state ballot initiatives, I knew my choices.

Then I spoke those words that so shake up the poll workers at the Yavapai County early voting office — “I need audio voting”. With white cane for identity, I waited patiently while the exceptional procedures sprang into action. Given headphones, a number keypad, and a chair, the poll worker returned my ID and inserted the card to rev up the Premiere Election Systems workstation. Ominously, the audio did not work. Reset. Whoops, audio but no keypad response. Move over one workstation and I was finally in business with instructions coming through the headphones and my brain fighting to cancel out the surrounding noise of the other voters in the office lobby alcove.

I was truly awe-struck at the announcement of the office of Presidential Electors, forgetting momentarily the key to press to actually cast this important vote. Then I got into the rhythm – 6 for next, 5 to vote, 4 for back. This ballot’s interaction was easier than the primary, which required more confirmation and interaction to move among races. Each race and its contestants or YES/NO answers were clearly announced. However, a 7 to cancel a vote also slowed the voice, in contrast to the disconcerting speech speedup I experienced in September. This round I understood the sample ballot and could predict how far to go. Reading the ballot for confirmation, a 9 key pressed, the clatter of the printer and I was done. I thanked the poll worker for competently handling this exceptional Vision Loser.

Whether my vote is actually counted accurately is a whole different matter, something the U.S. must fix if it cares for democracy as much as for marketplace ideology. Exhilarated from my independent action, I trekked on downtown for lunch near the famous Prescott Territorial Court House. Now, about those accessible street crossing signals — well, “adopt an intersection” is next on the agenda of this Vision Loser Voter.

Uh, oh, just when I thought I was safe from campaigning, comes a warning about Monday night scenic opportunity using the Court House Plaza prop. Sigh…

Previous Posts:

Voting Without Viewing? Yes, but It’s So Slow!

August 20, 2008

Taking advantage of accessible voting


I decided that since the Help America Vote Act had encumbered quite a few million $$$ for fancy electronic equipment with accessible extensions, I would take my chances to vote as independently as possible this round. Here’s the story of early voting in an Arizona primary. Vision Losers might use this experience to evaluate their own voting options. Other citizens and technologists will learn how electronic voting works for one tech savvy Vision Loser.

Against a background of the sorry state of American voting processes


First, let me say that, as an informed computer scientist, I do not for one nanosecond believe the odds are very high that my voting precinct actually got a correct tally of votes, including mine. I voted on a setup from the infamous Diebold, now renamed Premiere Election Systems. There’s just no way any independent assurance organization can reasonably test a black box version of software and hardware, let alone all the combinations of diverse local ballot designs, multiple configurations of the setup, and inevitable versions of evolving software. And that’s not worrying about human error by voting board personnel, malicious people, or silly policies like Ohio’s sleep-over procedures. Business ideology has trumped common sense democracy for Americans, unlike Australia and other countries that adopt an open approach.

Here is how I voted in September 2008

A preview and trial at my local voting board


Nevertheless, I wanted my independence and to force myself through the best possible preparation. A few months ago, I paid a visit to the Yavapai County Recorder’s Office for a personal trial on a mock ballot so I would be familiar with the equipment. I was reasonably impressed with the audio system, very enthusiastic about the personnel who welcomed the opportunity to try out their audio setup, and comfortable about working the equipment rather than asking someone to read and mark my ballot. I knew the actual voting would be slow and that I needed to do my homework on candidates and races so I could concentrate on the voting act itself.

Getting from sample to real ballot


I was pleased to find a nice little primary coming up in September with early voting several weeks ahead. One primary race is especially important in Arizona district No. 1, to replace Rep. Rick Renzi, who was indicted on 35 counts of fraud and other bad stuff. With a senator as presumptive Presidential candidate and a 40% voting record, poor representation of this region for months especially annoys me as economic and social policies have consequences I had not foreseen as I grapple with my own rehabilitation and my family’s future. Both major parties had a good slate of 4 or 5 candidates with experience relevant to a highly diverse region of Indian reservations, small cities, and lots of open space.


I made my choice of party and candidate for Congress and began to look for the other races of interest. There were few contests so I assumed the ballot would be a piece of cake. Actually, I had some trouble figuring out the full set of races. I used VoteSmart, the AZ clean elections site, the county listing of candidates, Arizona Republic and Daily Courier candidate blurbs, even Wikipedia. A sample ballot arrived just before my trip to the polling place, but my reader and I were confused about a long list of write-in lines.

The nitty-gritty mechanics of voting


So, as much prepared as I could be, I entered the county office lobby and asked to vote using the audio system. I think I was the first to request this as a flurry of calls upstairs quickly produced an access card to a screen protected by side blinders and the headset and keypad I had used in my previous experiment. Oh, and most important was a chair.


To summarize the audio voting process, you click the appropriate numbered buttons to advance through races, making and confirming choices while hearing the race titles, constraints and candidate names through headphones. There is nothing visual happening. I listened to the instructions and tried to adjust the volume to match both a synthetic voice announcement of races and human recorded reading of candidate names, using female voices. Occasionally, other customers and voters in a noisy lobby overcame the headset ear pads. The input device was a simple phone keypad with larger sized keys, comfortably held in my lap.

Uh, oh, am I in a loop?


I moved quickly through my choices for the congressional and legislative races. Then things became unfamiliar, with more races for county offices and state Supreme Court seats, all with only a write-in option. Not having any choice, I kept hitting 6 for the next race and 9 to confirm my under-voting for continuation to the next race. At one point, my attention drifted and I seemed to be in a loop of hitting next without actually having races announced, maybe between district, county, and state races.


After a while I got bored and tried an actual write-in, “gump” sounded good at the moment, and was easy to type although tedious to spell and confirm. Then I got serious and canceled out of write-in. In successive races for supreme court seats, the synthetic voice seemed to be getting faster, and very high pitched. Now, I can listen to really fast voices on my reading appliances. But by the end of what seemed like 50 races, I couldn’t understand the voice. Nor could I remember how to get the main menu or adjust voices. I was stuck, hoping the end would come before I fell asleep at the keypad. Finally, the printer attached to the side clattered and the voice trailed off into oblivion. My nearly trance state lifted and I called for the attendant to complete the session.


Had I actually accomplished my voting goals? I think so, as the early races that mattered seemed to be OK, but since I lost control in the middle and was pretty confused toward the end, I can only hope nothing invalidated those early race clicks. This whole process took about 30 minutes, long enough that I had to wake up my driver to leave. I reported my troubles to the poll assistants but left unsure we understood the cause of my loop and voice speed-up. My guess is that the speed-up started when I hit the relevant key during my write-in fumbling and the modes got confused as I skipped through further write-in choices.

Yes, I will vote this way again, but can others?


I had hoped this experience could be recommended to others, but, alas, I fear those less adept at computer interactions might not find the humor in the loop and could freak out with babbling voices. I will vote again this way in November but next time pay lots more attention to the exit, speed, and volume options. Everybody has a limit to attention and energy to put into this voting exercise. Half an hour for a handful of races and an enormous number of later vacuous choices is a dubious way of getting the job done.

Further concerns about time commitments, voice shocks, and practice


Another lesson for next time is to seriously invest more effort into learning about picking candidates. I hope to find more help from the SunSounds state audio assistance radio system or locate better candidate description materials. For example, the AZ Clean Elections brochure that arrived in the mail was organized by race, then district, then party, then candidate, which was beyond my patience to scan or anybody else’s willingness to read to me only the district No. 1 choices on pages 4, 39, and so on. Perhaps voting early comes before the publication of more candidate comparisons and recommendations from organizations like the League of Women Voters. Perhaps my “domain knowledge” of elections and state offices made my Google and Dogpile searches susceptible to “Donate Now” organizations. Certainly, I have not yet found a good source of advice directed to people like me voting blind for the first time. What I really want is a web page duplicating the ballot, divided into levels of government, with attached very short bios and links to longer histories, position statements, and reputable sources of candidate comparisons. The HTML and hypertext structuring are important as PDF is hard to use by audio and often loses the content structure when converted to a text stream. It might also be nice to have a candidate-a-day RSS feed to make the information more digestible in smaller chunks.


I would recommend to others considering using an audio or visually assisted voting workstation to request a trial. Yes, that means taking up time from election board workers, but I found them helpful, friendly, and interested in feedback. Anybody who can handle a bank ATM via audio should be ready to try out the system. However, someone with hearing problems might not be able to adjust the equipment to their needs in a noisy environment. The long-time blind who readily adapt to new devices should appreciate the new-found independence. However, new Vision Losers are faced with lot of work to master both the information gathering and the audio assisted voting process.


My biggest warning is the time commitment to survive the rigors of a long ballot. Had I wanted to actually write in a lot of names, I would have been there until closing time. With so few voters like me, there seems little data to accumulate experience for a warning label, but this is a practical constraint. Voters need to know how much time to ask of their drivers. With more voters using the assistive workstation, there would be a long wait just to get your chance. I suppose I could have asked for assistance during my loops and voice accelerations, but I just wanted to get out of write-in hell. Far more instructional time could be required for first time users of the audio assistance, especially if the equipment balks at start up or printing. And, what happens if a voter gives up during a voting session or nearly goes into a trance, as happened to me? Of course, there are other disabilities more complex than vision, such as strength and mobility, for using different input devices.


Getting a bit more technical, in my earlier visit for a trial, we discussed the need for a simulator for voter training using the audible equipment. I’d appreciate knowing if this exists anywhere. Since the user interaction is by phone keypad, a simulator with a mock ballot, as in my trial, could serve people anywhere if they knew the voting system designated for them. This could be done by phone or be a downloaded or web 2.0 app, something even I could write if I knew the rules; see the sketch below. I could have called up and learned the instructions in the quiet of my home, memorized my way out when I hit a snag, and also reported problems back to the ballot designers and equipment vendors. Had I known about the write-in race survivor test, I’m not sure I would have followed through to an actual vote. Those suffering from synthetic voice shock could at least determine whether they wanted to try and were able to interpret the race announcements and instructions.
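
Purely as a sketch of the kind of browser-based trainer imagined here, and not any vendor’s actual system, the snippet below wires an invented three-race mock ballot to an on-screen keypad and speaks through the browser’s own speech synthesis (a capability of current browsers, not the 2008 equipment); the 4/5/6 key meanings follow my description above. Some browsers hold back speech until the first click, so press a key if the opening announcement is silent.

    <!doctype html>
    <html lang="en">
    <head><meta charset="utf-8"><title>Mock audio ballot keypad</title></head>
    <body>
      <p>Keys: 4 = back, 5 = vote, 6 = next race. Listen through speakers or headphones.</p>
      <div id="keypad"></div>
      <script>
        // Invented mock ballot for practice only
        var races = [
          'Representative in Congress: candidates A, B, and C',
          'State Senator: candidates D and E',
          'Supreme Court retention: yes or no'
        ];
        var current = 0;
        function say(text) {
          // Browser speech synthesis stands in for the voting machine voice
          window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
        }
        function handleKey(key) {
          if (key === 6 && current < races.length - 1) { current += 1; say(races[current]); }
          else if (key === 4 && current > 0) { current -= 1; say(races[current]); }
          else if (key === 5) { say('Vote recorded for race ' + (current + 1)); }
          else { say('No action for key ' + key); }
        }
        // Build a simple 0-9 keypad; a phone-style pad or number row works the same way
        var pad = document.getElementById('keypad');
        for (var k = 0; k <= 9; k++) {
          (function (n) {
            var button = document.createElement('button');
            button.textContent = n;
            button.addEventListener('click', function () { handleKey(n); });
            pad.appendChild(button);
          })(k);
        }
        say('Practice ballot loaded. ' + races[0]);
      </script>
    </body>
    </html>

A trainee could rehearse the key sequence at home, in quiet, before facing the real equipment, its noise, and its time pressure.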


While the overall interaction of voting with only audio is really pretty easy, clearly the keypad needs a separate HELP key and a RESTORE DEFAULTS action. Maybe these were available, but I was so deep into figuring out how to reach the end of the ballot, I was not interested in finding the escape button. More seriously, as a software testing expert and veteran system breaker, I really would like to replicate my experiences with the next-race loop and accelerating voice problems. It would be too irreverent and silly for a 65-year-old lady to whiz around a county office building crowing that I’d broken the system, lookee, the computer is in a really bad state. No, I really appreciated the professionalism and help of the voting staff, but, well, I think I did break something and wish it could be reported and corrected.


So, why don’t I, a formerly reputable software professional, try to do more? Well, first, with only two years of legal blindness I am still a learner in the assistive technology world. But more seriously, getting on my high horse, this whole system is an affront to U.S. citizenry. In my previous post, I equated electronic voting with two mixed metaphors, a “moon shot for democracy” and “extreme voting”, like a sporting challenge.

A rant on eVoting as a ‘bungled moon shot’


Just as Sputnik shocked the U.S. into action for education in science, just as a catastrophe on the moon in 1969 would have undermined U.S. self-confidence, just as the later space shuttle failures signaled a decline in space travel prowess, a definitive failure in our voting system undermines our feeling of living in a democracy. Yet, there is every sign that our voting system continues to be bungled, in the names of fancier technology and free enterprise. In my mind, the quest for a technological solution is a doable, long term project but only if committed to technologists with the expertise and freedom to question the safety of every step in the process, test each component down to its core against its specifications, simulate to exhaustion, and finally rely on combined community acceptance of safety to launch. In many ways, a rocket system is easier to design because it works with and against the continuous laws of physics, whereas a voting system works on discrete math and with and against the laws of human capabilities and differences. The security quality of human interactions with the system is another dimension of complexity, but the bottom line is that voting systems cannot be black box. Discrete systems must be subjected to inductive reasoning applied to the code, hardware, and user scenarios, with a huge dose of version control. Experimental software engineering has established the efficacy of software inspection, especially performed early and often using multiple viewpoints from varieties of expertise. Asking a weak testing regime to accept the assurance of vendors of proprietary systems, even against clear signs of fallibility, is like delivering a rocket to the pad, asking the astronauts to jump on, and not telling mission control how the rocket will behave.


My other metaphor of extreme voting is based on both user and developer experience. It is a lot to ask voting equipment vendors to produce extensions to serve all ranges of human differences, including those considered disabilities. I was amazed the keypad and audio system worked as well as they did. Indeed, I might ask why spend all that money on fancy visual interfaces when audio will do, except for hearing impaired people. Users like me are forced into extreme and unknown conditions like long ballots read by unfamiliar voices marked by never-before-touched keypads. Please accept my invitation to use a bank ATM by audio to get a feeling for this experience. My current ATM transaction time is about a minute by knowing the exact sequence of key clicks, but at first I had little idea of the menu structures or the confirmation, cancellation, and selection instructions to hold in mind. Voting by audio is a similar experience.


To sum up, even though I had prepared myself well, I fell into a mess of write-in races that caused me either to mishandle the keypad input or to find an actual flaw in the system. In either case, the unpredictability of the long ballot and the time required to work through it present discomfiting, though not insurmountable, conditions for voting independently. But I survived, and I will continue to vote this way in the big election in November. I will also work hard, perhaps with better information, to identify the races and candidates where I really care about my vote. I certainly do not want to walk away wondering if I voted for the right guy.

References for Voting without Vision

  1. Previous post on Extreme Voting and a Moon Shot for Democracy
  2. California Secretary of State appraisal of voting system security and accessibility
  3. Concerns of computer scientists about electronic voting systems
  4. Audio version of this post

Synthetic Voice Shock Reverberates Across the Divides!

July 30, 2008

Synthetic Voice Shock — oh, those awful voices!


As I communicate with other persons with progressive vision loss, I often sense a quite negative reaction to synthetic, or so-called ‘robotic’, voices that enable reading digital materials and interfacing with computers. Indeed, that’s how I felt a few years ago. Let’s call this reaction "synthetic voice shock" as in:

  • I cannot understand that voice!!!
  • The voice is so inhuman, inexpressive, robotic, unpleasant!
  • How could I possibly benefit from using anything that hard to listen to?
  • If that’s how the blind read, I am definitely not ready to take that step.

Conversely, those long experienced with screen readers and reading appliances may be surprised at these adverse reactions to the text-to-speech technology they listen to many hours a day. They know the clear benefits of such voices, rarely have trouble understanding them, exploit voice regularity and adjustability, and innovate better ways of "living big" in the sighted world, to quote the LevelStar motto.

The ‘Synthetic Speech’ divide


Synthetic voice reactions appear to criss-cross many so-called divides: digital, generational, disability, and developer. The free WebAnywhere is the latest example, with a robotic voice that must be overcome in order to gain the possible benefits of its wide dissemination. Other examples are talking ATMs and the accessible audio for voting machines. The NVDA installation and default voice can repel even sighted individuals who could benefit from a free screen reader as a web page accessibility checker or a way to learn about the audio assistive mode. Bookshare illustrates its book reading potential with a robotic, rather than natural, voice. Developers of these tools see the synthetic voice as a means to gain the benefits of their tools, while users not accustomed to speech-enabled hardware and software run the other way at the unfriendliness and added stress of learning an auditory rather than visual sensory practice.


This is especially unfortunate when people losing vision may turn to magnifiers that can only improve spot reading, when extra hours and energy are spent twiddling fonts and then working line by line through displayed text, when mobile devices are not explored, and when the pleasures of book reading and the quality of information from news are reduced.

Addressing Synthetic Voice Shock


I would like to turn this posting into messages directed at developers, Vision Losers, caretakers, and rehab personnel.

To Vision Losers who could benefit sooner or later

Please be patient and separate voice quality from reading opportunities when you evaluate potential assistive technology.


The robotic voice you encounter with screen readers is used because it is fast, flexible, and widely accepted by the blind community. But better natural voices do exist that can be used for reading books, news, and much more. While these voices may seem offensive at first, synthetic voices are actually one of the great wonders of technology, opening the audio world to the blind and gradually becoming common in telephony and help desks.


As one with Myopic Macular Degeneration forced to break away from visual dependency and embrace audio information, I testify that it takes a little patience and self-training, and then you hear past these voices and your brain naturally absorbs the underlying content. Of course, desperation from print disability is a great motivator! Once you overcome the resistance to synthetic voices, a whole new world of spoken content becomes available using innovative devices sold primarily to younger generations of educated blind persons. Freed of the struggle to read and write using defective eyesight, there is enormous power to absorb an unbelievable amount of high quality material. As a technologist myself, I made this passage quickly and really enjoyed the learning challenge, which has made me an evangelist for the audio world of assistive technology.


If you have low vision training available, ask about learning to listen through synthetic speech. For the rest of our networked lives, synthetic voices may be as important as eccentric viewing and using contrast to manage objects.


So, when you encounter one of these voices, maybe think of it as another rite of passage to remain fully engaged with the world. Also, please consider how we can help others with partial sight. With innovations like WebAnywhere and free screen readers, like NVDA, there could be many more low cost speaking devices available worldwide.

To Those developing reading tools with Text-to-Speech



Do not expect that all users of your technology will be converts from within the visually impaired communities already familiar with TTS. Provide a starter voice tuned in pitch, speed, and simplicity to achieve the necessary intelligibility and sufficient pleasantness. Suggest that better voices are also available and show how to put them to use.
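
For developer readers who like something concrete, here is a minimal sketch of that advice using the pyttsx3 Python library. This is purely illustrative and not a tool reviewed in this blog; the rate value and the welcome sentence are my own placeholders, and pitch control depends on the underlying speech engine, so I leave it out.

# Minimal sketch: offer a gentle default rate and let the user pick a voice.
# Uses the pyttsx3 library (illustrative only, not a tool reviewed here).
import pyttsx3

engine = pyttsx3.init()
engine.setProperty('rate', 150)   # slower words-per-minute for newcomers

# List the installed voices so a user or installer can choose a friendlier one.
voices = engine.getProperty('voices')
for voice in voices:
    print(voice.id, voice.name)

if voices:
    engine.setProperty('voice', voices[0].id)  # first installed voice as a start

engine.say("Welcome. You can change this voice and its speed at any time.")
engine.runAndWait()

The point is not the particular library but that the defaults a newcomer meets in the first five minutes are a design decision, and a cheap one to get right.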


It's tough to spend development effort on such a mundane matter as the voice, but technology adoption lessons show that it only takes a small bit of discouragement to ruin a user's experience and send a tool they could really use straight into the recycle bin. Demos and warnings could be added to specifically address Synthetic Voice Shock and show off the awesome benefits to be gained. The choice of a freely available voice is a perfectly rational design decision but may indicate a lack of sensitivity to the needs of those newly losing vision, forced to learn not only the mechanics of a tool but also how to listen to this foreign speech.

To Sighted persons helping Vision Losers

You should be tech savvy enough to separate the voice interface from the core of the tool you might be evaluating for a family member or a demonstration. Remember that the recipient of the installed software will be facing synthetic voice shock, possibly dependency on the tool, and a long learning curve. Somehow, you need to make the argument that the voice is a help, not a hindrance. Of course, you need to be able to understand the voice yourself, perhaps translate its idiosyncrasies, and tune its pitch and speed. A synthetic voice is a killer software parameter.


You may need to seek out better speech options, even lay out a few bucks to upgrade to premium voices or a low-cost tool. Amortizing $100 for a voice interface over the lifetime hours of listening to valuable materials, maintaining an independent lifestyle, and expanding communication makes voices a great bargain. Just for illustration, a couple of hours of listening a day for five years is well over 3,000 hours, which works out to pennies per hour.


And, who knows, many of the voice-enabled apps may help your own time-shifting, multi-tasking, mobile lifestyle.

To Rehab Trainers

In the meager amount of rehab available to me, the issue of Synthetic Voice Shock was not addressed at all. Eccentric viewing, the principles of contrast for managing objects, a host of useful independent living gadgets, font choices, etc. are traditional modules in standard rehab programs. Perhaps it would be good to have a simple lesson listening to pleasant natural voices combined with rougher menu readers, just to show it can be done. Listening to synthetic voices should not be treated like torture but rather like a rite of passage to gain the benefits brought by assistive technology vendors and already widely accepted in the visually impaired communities. Indeed, inability to conquer Synthetic Voice Shock might be considered a disability in itself.


As I have personally experienced, it must be especially difficult to handle Vision Losers with constantly changing eyesight and a mixed bag of residual abilities. It could be very difficult to tell Vision Losers they might fare better reading like a totally blind person. But when it comes to computer technology, that step into the audio world can reduce the stress of struggling to see poorly in a world geared toward hyperactive, visually oriented youngsters, especially once print disability opens the flow of quality reading materials, often ahead of the technology curve for sighted people.


The most useful training I can imagine is a session reading an article from AARP or Sports Illustrated or a New York Times editorial copied into a version of TextAloud, or a similar application, with premium voices. Close those eyes and just relax and listen, and imagine doing that anywhere, in any bodily position, with a daily routine of desirable reading materials. To demonstrate the screen reader aspect, the much maligned Microsoft Sam in Narrator can quickly show how menus, windows, and file lists can be traversed by reading and keystrokes. The takeaway of such a session should be that there are other, perhaps eventually better, ways of reading print materials and interacting with computers than struggling with deteriorating vision, assuming hearing is sufficient.

So, let us pay attention to Voice Shock


In summary, more attention should be paid to the pattern of adverse reactions of Vision Losers unfamiliar with the benefits of the synthetic speech interaction that enables so many assistive tools and interfaces.

References on Synthetic Voice Shock

  1. Wikipedia on Synthetic Speech. Technical and historical, back to the 1939 World's Fair.
  2. Wired for Speech, research and book by Clifford Nass. Experiments with effects of gender, ethnicity, personality in perception of synthetic speech.
  3. Audio demonstrations using synthetic speech
  4. NosillaCast podcaster Allison Sheridan interviewing her macular degenerate mother on her new reading device. Everyzing is a general search engine for audio, as in podcasts.
  5. Example of a blog with natural synthetic speech reading. Warning: Political!
  6. Google for 'synthetic voice online demo' for examples across the synthetic voice marketplace. Most will download as WAV files.
  7. The following products illustrate Synthetic Voice Shock.
  8. Podcast Interview with ‘As Your World Changes’ blog author covering many issues of audio assistive technology
  9. Audio reading of this posting in male and female voices

Listen up! Technology, Materials, and strategy for non-Visual Reading

June 22, 2008

My adoption of the audio reading mode

This post describes how this Vision Loser reads on a daily basis. Sighted readers of this blog should gain some insight into alternative ways technology delivers what you read visually on printed pages or screens. Those now in transition with vision loss can get a snapshot of a specific combination of reading technology, web delivery systems, and kinds of reading materials.


I consider myself an effective reader at this point in my vision loss. Three years ago I would have had no way of describing how I would be reading now. Partially, this was from the inability to know how my sensory apparatus would be working. For the record, I see pages where the text is mostly smudges. Computer screens have reasonably clear outlines, with text that can be enlarged by a monitor or text size setting but often remains more like those irritating CAPTCHA boxes, all wobbly and sliced up. Partial sight can be minimally used by magnification, contrast, and eccentric viewing, but for any reasonable way of consuming information, one must step over into the audio world. That means a screen reader or self-voiced reading devices, all using synthetic speech. After 2 years of hard work, a lot of technology evaluation, and countless hours of practice, the audio world now seems natural. I have no problem reconciling myself with this way of reading for the rest of my life, trusting that my hearing and hands will not give out on me.

My portfolio of reading devices

Another reason I would not have been able to predict how I read now, in 2008, is that several products I use constantly had yet to be invented in 2005. Processing power, miniaturization, wireless, and blind-driven inventiveness have produced a stable of devices that complement the PC (or Mac, whatever).

  • The LevelStar Icon is a screen-less Linux hand-held that reads all its menus and text as I cycle through email, news, and web content. The Mobile Manager hand-held fits into a docking station with keyboard, augmented speakers, power, and ports. I use the Icon for email by POP3 from Gmail, occasional recordings, RSS feeds of news and podcasts, web browsing, and special access to books and newspapers.
  • The American Printing House for the Blind Book Port is another hand-held box whose only user interface is a keypad, requiring ear buds or external speakers. Its memory card is loaded from a PC with books, mp3 files, and text. The Book Port is designed for easy navigation through books and its file system. Like the Icon, it can also record memos. The APH Book Port is currently available only used, as the upgrade is having manufacturing problems. I use the Book Port primarily for books and lengthy synthetic spoken versions of files. A competitor, the HumanWare Victor Reader Stream, offers similar reading capabilities, but I have never become comfortable with its navigation techniques; it is primarily just not my way of working.
  • The latest marvel of reading technology is the Kurzweil NFB Reader, which has shrunk the scanner-OCR-reader architecture onto the Nokia N82 platform. Well, it could also be used to make phone calls if attached to a phone service. This little guy is great for on-the-fly reading like room service menus, TSA notices stuffed in your luggage, mail, and printed pages lying around. One of the greatest frustrations of print disability is the difficulty of performing normal inter-human transactions where a sighted person hands you a business card or information sheet or agenda and you need that information to take the next step toward your goal. Another frustration is the profusion of junk material surrounding the little piece of critical action, like the amount to pay on a bill, but that's where family members can be called upon. The KNFB Reader illustrates Kurzweil's mantra that exponentiation dominates linearity, urging us to think about potentially using far more computing power to overcome our neural deficiencies.
  • The NVDA screen reader, discussed in an earlier posting on my selection of NVDA, is my PC workhorse. It shows amazingly high quality and functionality for a young product, deriving from its free, open source origins driven by a generation of blind, tech-savvy developers and users seeking an alternative to the proprietary screen readers forged into the rehab-industrial complex. Note: I donate to NVAccess. Unless you need specialized scripts for complex or barely accessible products, such as many enterprise data management systems, NVDA will do well, especially in conjunction with Mozilla products.
  • Another supporting tool necessary for full reading is the Kurzweil 1000 for simplifying and managing scanners, which may otherwise have inaccessible and photo-oriented interface managers. Scanned material for submission to a service like Bookshare.org requires considerable editing that is well supported in K1000. I use the K1000 for general editing and spell checking as well as scanner management. Note that the K1000 has its own nice self-voicing practice to assist its operations and editing.


So, that’s all new technology I’ve learned in the past 2 years, ranging from my Identity cane to a suite of talking devices.

Sources of reading materials

What about the representation of the reading materials, and where do they come from?

Human narrated audio books

Of course, we are all familiar with humanly recorded audio books, basically a long stream of bits, possibly with some embedded strings that reader technology can identify as section or information markers. Blind-serving organizations like the NLS (National Library Service) have long provided human narrators, recording media, reading tools, and a library-coordinated distribution system. I personally have not tapped into this because the NLS format has only recently become available on the Icon, and, besides, I have a little problem with its paperwork to get myself certified. Audible.com is the commercial system integrated with the Book Port and soon the Icon, but I have yet to find the book that compels me to subscribe.

Note added December 08: Other sources of narrated materials are available in podcast format. The LibriVox podcast of book chapters is prolific and well done. Assistive Media extracts and reads popular New Yorker-style magazine articles. State services like Arizona Sun Sounds offer books, newspapers, and government information.

DAISY, Digital Talking Books and Bookshare.org library

The core technology for representing reading materials is XML, the eXtensible Markup Language, in the family of HTML for web pages. Text files have human-added or automatically added tags, like <title>, which the reading tool interprets for the user, whether that user is another computer or a human. A special version, DAISY (Digital Accessible Information System), is the interchange format for books. I get most of my books from Bookshare.org, which uses a copyright exemption to allow volunteers and publishers to contribute texts for distribution to members certified with a print disability, who agree not to distribute further but retain free choice of reading tools and locations of materials. For me, this meant I could rebuild my personal library faster than I could donate or throw away my printed books.
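
To make the idea of tagged text concrete, here is a toy sketch in Python. The XML fragment is invented and vastly simpler than a real DAISY book, but it shows how a reading tool can find structure by tag name rather than by visual layout.

# Toy illustration of tagged text. The fragment is invented and far simpler
# than a real DAISY book; it shows structure found by tag, not by layout.
import xml.etree.ElementTree as ET

book_xml = """
<book>
  <title>An Invented Example</title>
  <chapter>
    <title>Chapter 1</title>
    <para>Text the reading tool can speak, one element at a time.</para>
  </chapter>
</book>
"""

root = ET.fromstring(book_xml)
print("Book title:", root.findtext("title"))
for chapter in root.findall("chapter"):
    print("Chapter:", chapter.findtext("title"))
    for para in chapter.findall("para"):
        print("  ", para.text)

Because the markers are explicit, a listener can jump by chapter or section instead of wading through one undifferentiated stream of audio.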


The beauty of the Bookshare distribution system was immeasurably enhanced by the Icon's integration of a book search and download capability. If I hear about a New York Times best seller, a classic, or a reader's choice, I can pull up the Icon book search by title or author, automatically log on to Bookshare, download the book, if available, and start reading, all in about a minute! Of course, if the book is not available, I can look for an audio version at the public library or a commercial service or get a printed copy to scan. Indeed, I am now contributing books selected by my monthly AAUW book club, which takes several hours of work as I learn to expedite scanning and editing with the Kurzweil 1000 system. But it's gratifying to know this process offers good reading to thousands more people like me. I carry my entire library on my easy reading Book Port, categorized as Fiction, Biography, etc., and can also search these books in full text format. This pipeline of easily retrieved and stored books has truly broadened my reading choices with more than enough entertainment, enlightenment, and information.

Not yet available digital book collections

What about all those mass-scanned book collections by Google, Amazon, Microsoft, etc.? And those PDF e-books? Too bad, most of these are not available to me, or are very hard to use. The popular Gutenberg and Google Book Search do provide out-of-copyright materials, but I personally rarely need these. And, as I commented in my post on "Seeing through Google Book Search", I am limited in my research by the image-only presentation of pages from a book search. While PDF is a nearly universal viewable distribution format, the Adobe Acrobat Reader is always changing its Read Out Loud capabilities, insists on updating itself every use, and generally makes me feel out of sorts, like "when good technologies go bad", with apologies to the Adobe co-founder who was my grad school office mate. PDF accessibility is such a mixed bag that I just convert all PDF files to TXT and live with what I can get out of the results using the Icon, Book Port, or screen reader. My pet peeve is the need to convert PDF newsletters into TXT when the content could just as well have been delivered as the more easily readable HTML. Like many other people, I thought I could buy an ebook and apply a synthetic voice reader, but this mode of distribution is verboten by DRM (Digital Rights Management).
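
For the technically curious, the PDF-to-TXT step can be as simple as the Python sketch below, using the pypdf library. This is an illustration, not the converter I actually use, and results vary wildly with how the PDF was produced; image-only PDFs yield nothing and need OCR instead.

# Minimal sketch of a PDF-to-TXT conversion using the pypdf library
# (illustrative only, not my actual converter).
from pypdf import PdfReader

def pdf_to_txt(pdf_path: str, txt_path: str) -> None:
    reader = PdfReader(pdf_path)
    with open(txt_path, "w", encoding="utf-8") as out:
        for page in reader.pages:
            text = page.extract_text() or ""   # guard against empty pages
            out.write(text + "\n")

# Example: pdf_to_txt("newsletter.pdf", "newsletter.txt")

The output is plain text that any screen reader, the Icon, or the Book Port can handle, which is exactly why HTML or TXT distribution in the first place would save everyone the trouble.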


Whew, this is getting long as I inventory my reading experience, but here are some of the happier parts.

More news than ever from NFB via Bookshare and Icon News Stand

As my vision faded so that I could no longer read newsprint comfortably, I kept my NY Times subscription to retain access to the web site. I learned to find the sections of interest, like Editorials and Business, and navigate a link path while reading the articles I wanted with the TextAloud browser toolbar. Ouch, was this cumbersome! Now, I use the NFB Newsline newspaper delivery service offered in the Bookshare membership and facilitated by the Icon News Stand application. With one "get new issues" click, I have not only the NY Times but also the Wall Street Journal, Washington Post, San Francisco Chronicle, Economist, New Yorker, and more. All are structured for reading by publication, issue, section, title, and text. This means I can scan and selectively read hundreds of pages of newsprint in a half hour, an unpredictable benefit of print disability.
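
To show why that structure matters, here is a toy Python sketch. The nesting of publication, section, and article is invented for illustration and is not the actual Newsline file format, but the skim function captures what selective reading by structure feels like.

# Toy sketch of structured news: the nesting is invented, not the actual
# Newsline format, but it shows how structure enables skimming headlines
# and opening only the articles worth hearing.
issue = {
    "publication": "Example Times",
    "sections": [
        {"name": "Editorials", "articles": [
            {"title": "A Sample Editorial", "text": "Body text..."},
        ]},
        {"name": "Business", "articles": [
            {"title": "A Sample Business Story", "text": "Body text..."},
        ]},
    ],
}

def skim(issue):
    """Speak or print just the section names and headlines, skipping bodies."""
    for section in issue["sections"]:
        print(section["name"])
        for article in section["articles"]:
            print("  -", article["title"])

skim(issue)

A flat stream of audio allows no such skipping, which is the difference between a half-hour skim and an afternoon of listening.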

Local news, the gaping hole in the infrastructure

Of course, there's a downside to news reading in that my local newspaper uses a convoluted content management system that seems to split every article into paragraphs that intertwine with advertisements and obituaries. Luckily, there is an RSS feed that delivers titles and a city feed that offers more official news, but I have yet to find a way to keep up on local events, even using the radio. This is one of the gaping holes in the information infrastructure for print-disabled readers. I avidly track Jon Udell's blog on strategies for Internet citizens for improving community networked information.

RSS feeds as supplementary and primary news sources

Along the lines of the DAISY representation for books is the RSS (Really Simple Syndication) format for feeds that deliver articles and podcasts. This is the key technology for the rest of most of my reading, with over 80 feeds in my Icon RSS client. These bring CNN, Inside Higher Ed, Science Daily, Slate, and many more magazine and news headline style materials. These are complemented by my evolved collection of news, recreational, and technical podcasts. While I really do not know what I am missing, I am thoroughly comfortable that I am keeping up with technology trends through ITConversations.com with its interviews with innovators, TechNation, IEEE Spectrum, etc. Rarely is a podcast a time-waster, and I feel obligated to listen to keep up. Similarly, a judicious selection of blogs helps me track what's going on in my areas of interest, including accessibility, podcasting media, and, especially this year, politics.


Two cool things about RSS are the ability to hierarchically structure feeds and to exchange feeds among readers. If you want mine, here's Susan's reading sources, a file that can be imported into your choice of RSS reader or cribbed from in a text editor. Since all navigation in the Icon RSS reader is within a tree, I have a hierarchy of News into General, Technology, Politics, and Science categories, then further in places into trees of blog or other special content. Since feed updating is time consuming, maybe half an hour, the tree structure allows updating only a single feed or group of feeds, e.g. if I need a politics fix late on a Tuesday primary day. Of course, I also have several mailing lists with associated folders in the Icon email client, keeping up on the mdsupport.org, Book Port, Bookshare, NVDA, and Icon user discussion lists.
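
For the curious, the exchange file is an OPML outline, which is itself just more XML. Here is a small Python sketch that builds a hierarchical feed list with the standard library; the categories and feed address are placeholders, not my real subscriptions.

# Sketch of building a hierarchical OPML feed list with the standard library.
# The categories and feed URL are placeholders, not my real subscriptions.
import xml.etree.ElementTree as ET

opml = ET.Element("opml", version="2.0")
body = ET.SubElement(opml, "body")

news = ET.SubElement(body, "outline", text="News")
tech = ET.SubElement(news, "outline", text="Technology")
ET.SubElement(tech, "outline", text="Example Feed", type="rss",
              xmlUrl="http://example.com/feed.rss")

ET.ElementTree(opml).write("reading_sources.opml",
                           encoding="utf-8", xml_declaration=True)

Because the outline is a tree, a reader can update or browse one branch, say Politics on a primary night, without touching the rest.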

Progressive reading productivity and quality

How progressive are these reading tools? I have been an Internet user since around the 1970s. Indeed, I found myself on the mailing list of the very first spam message 30 years ago. I subscribed to and made some embarrassing posts in Usenet groups and mailing lists in the 1980s and 1990s and had my first web page around 1993. To me, this surfeit of information is a natural progression. However, when my beloved Icon had to go to the shop for repair, I realized how important the advances of the past year were. I found the web-based RSS readers clumsy and never did get any setup comparable to my Icon trees, menus, and quick-read articles.


To be provocative, I estimate my reading productivity now, compared to a few years ago, as about 10:1 in retrieving content available via Internet, wireless, RSS and other clients. Once retrieved, I feel about a 10:1 gain in ability to scan, filter, selectively read or listen to the content. Of course, I cannot get everything I need and occasionally rev up the Icon or PC Firefox web browser for searching and surfing. I’ll discuss my feelings about information overload and reading habits and brain plasticity in the companion post on “Hyperlinks considered Harmful”.


One of the greatest benefits of exploiting vision loss and using these reading tools is that advertising fades into the noise. Given the current economic model for most information services, this makes me a lousy consumer. Well, too bad; I really would like to kick in for a low-cost subscription, say $10, but do not have that opportunity. I'd like to pay $3 for each book I read, with funds going to the author and publisher, as is occurring for music. But my guilt is assuaged by taking every opportunity to tell people, in person and virtually, about resources I like, in hopes that enough people will click the ad links and buy the resources directly. And, much as I love my reading tools, losing vision is costly, nearly $10k for the above tools.

Advice for both sighted and impaired readers

So, you still fully sighted readers should now have a sense of how one Vision Loser has replenished her reading vessels with forms of content, like DAISY, and tools that you never heard of and would consider primitive compared to iPhones and QuickTime. But, if my claims of 10:1 increased retrieval and 10:1 improved reading hold true, this step over into the audio world is hardly a loss of reading capability. Limited access to certain kinds of material is offset by opportunities to access special content not available to the sighted world, like the Bookshare library and NFB Newsline.


For those losing vision, as I have for three years, I urge you to begin tapping into this audio world sooner than your denial and hopes might lead you. Try using a free screen reader and audio conversion tools, and get used to gaining more information by audio whenever you feel discomfort with your eyeballs glued to your screens. I hope this article assures you there are many ways to adapt your reading styles to meet your needs, and even to find gains you never dreamed of. You might visit a disability services department at a local university or an assistive technology demo exhibit hall. But beware that rehab and disability services personnel are themselves grappling with technology learning curves and are locked into vendor distribution practices that lag behind some of the tools I advocate in this blog. A good starting point, whatever your level of sightedness, is the collection of user stories in the Nextup.com text-to-speech blog.

For More Information on Assistive Technology


Learning to Write By Listening

May 26, 2008

Revamping writing skills is a major phase in vision loss transition

One reason for starting this blog was to regain my writing skills. This post describes my personal techniques for writing while using a screen reader and other assistive tools. A suite of recorded mp3 files illustrates some steps in rewriting and expanding the previous post on the Identity Cane.

Most of this post assumes a state of experience comparable to mine three years ago, before I became print-disabled. It was hard then to know what questions to ask to prepare myself. I bumbled through using the TextAloud reading application, which enabled me to write well enough while I could control the lighting around my PC and begin to experiment with alternative screen reader packages. Unfortunately, I had some truly humbling experiences trying to edit rapidly at review panel meetings with overhead lights bearing down, voices all around, and a formidable web-based panel review system. Following the edict "Do no harm", I recognized a challenge of physical, cognitive, and technological dimensions. I had to admit I was professionally incompetent when it came to writing, ouch!

My model for writing without vision

The basic questions are:

  • What are my accuracy versus speed trade-offs? And, how do I manage them?
  • What tools do I need? And, how do I teach them to myself?
  • How must I change my writing style? What are the new rules of ‘writing by ear’?

If you are not sure how this writing process is working, listen to me writing some text using the NVDA screen reader.

The tradeoffs of accuracy and speed

The Accuracy Versus Speed Tradeoff is intrinsic to writing. How fast do you record your thoughts, accepting some level of typing and expression errors, with separate clean-up edits and rewrites? If I type very fast, I make more errors but am better able to record the thoughts and even establish a "flow" mental state. Writing more slowly allows corrections of wording, punctuation, and spelling but risks loss of thread and discouragement from a feeling of slowed progress.

Writing and editing are very different cognitive tasks, complicated by operating primarily in listening mode. The input and output parts of the brain must operate together. A document filled with typos is pure agony to correct, causing a cascade of further errors and often destroying the structure of the whole document. One twitch in an edit can remove more than a letter, even a line, sentence, or paragraph. In "computational thinking" terms, the trade-off is to design the interactions of two concurrent processes that interleave events and actions to produce a document with a tolerable number of errors to be removed by even more processing involving editing tools.

I tried several drafting techniques. Writing longhand notes, outlines, and snippets had worked for 40 years, but I could no longer read my handwriting. Recording into my Icon PDA helped organize my thoughts and extract some pithy phrases from my brain. As my memory has improved to take over former vision-intensive tasks, I have found it possible to mentally compose a paragraph at a time, then hold it together long enough to type into the word processor.

Basic writing and Listening Tools

Writing without looking requires several tools, with my choices discussed below:

  • Compositional, for typing, formatting as needed, and editing
  • Spell checker, possibly a style or grammar checker
  • Previewer to present the written results as they will be read by sighted, partially sighted, and blind readers
  • Speech tools to read while typing and editing, as well as presentation of the written result
  • Voices to capture alternative audio presentations of written results, as well as feedback on style and tone

My personal process is:

  • Compose in mostly text with minimal HTML markup using Windows NotePad;
  • Use the NVDA screen reader for key and word echo, with punctuation announcement off then on;
  • Copy text into the K1000 tool, applying its fabulous spell checker, listen for errors and speaking flaws using its self-voicing reader, and copy back to Notepad;
  • Listen in several voices, including both female and male, for flaws and nuance of style;
  • Preview in a browser, Mozilla Firefox, to grasp whatever I can see on a large screen and to check links;
  • Copy into the WordPress blog editor.

The obviously best choice for writing is the word processor most familiar to the writer. However, criteria may change as vision degrades. The spell checker may not have visible choices and may not announce its fields to a screen reader. Excess interface elements and functionality can get in the way. Upgrades and transition to a new computer may demand new software purchases. After years of Microsoft Word and Netscape HTML Composer, I finally settled on the combination of Windows Notepad and Kurzweil 1000. The trickiest feature of the ubiquitous Notepad is "word wrap" for lines, with very few other ways for a writer to screw up a document. Since I write HTML for my website and blog, using Notepad avoids the temptations of fancy pages by not using WYSIWYG. Also, Notepad never nags about licenses, discount deals, and upgrades.

On the upscale side, I needed a scanner manager for books and other printed stuff. The Kurzweil Educational Systems 1000 offers not only scanner wrappers but also several word processor features. One is a beautiful spell checker that reads the context, spells the word, and offers alternatives, all using its own self-voiced interface. Listen to me and the K1000 spell checker. I also like having a reader with alternative word pronunciation, pausing, and punctuating. However, I occasionally lost text due to lock-ups and unpredictable file operations, so I opted for the universal, simple Notepad for composition.

Update December 2008. I am now using the free Jarte editor, based on WordPad. Behaving like Windows WordPad, Jarte has a spell checker similar to K1000's, multi-document management, and other features. Most importantly, the interface recognizes and cooperates with a screen reader, NVDA for me. The Carolina software designers have done a great service for visually impaired writers and should serve as a model for interface developers of other software products. I'll soon be upgrading from the free version to the paid version with extra features.

A screen reader drives writing by listening

As discussed in the NVDA screen reader choice posting, I do not use the conventional expensive screen readers, in favor of a free, open source wonder that I expect to rule the future of assistive technology. NVDA allows me to switch among voices, choose key and word echo, and set the degree of punctuation announced.

Writing and reading by listening has surprising consequences. First, it strongly differentiates sighted readers from those listening, who will probably not hear the colon you use to start a list of clauses separated by semi-colons. Second, documents must be read multiple times, with and also without punctuation announcements. It is difficult to concentrate on the sentences when every comma, quotation mark, and dash is read. And it is necessary to hear every apostrophe and other punctuation mark to locate extraneous as well as missing items.

Synthetic voices alter writing practices

Another suite of editing tools is the synthetic voices themselves, which may come as a surprise to many sighted as well as newly unsighted writers and readers. Synthetic voices have dictionaries of pronunciations but inevitably screw up in certain contexts. Is that "Dr." a street or an educational degree title? Is "St. Louis" the city with a saint or a street? Is 2 the numeral, two spelled out, or too as in also? No matter your screen reader settings and data, your readers' settings may differ. Well, some of this can be tweaked, but generally my attitude has been to just live with the quirks.
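
One pragmatic workaround, sketched below in Python for the curious, is to normalize troublesome text before it ever reaches the voice. The substitution list here is invented, incomplete, and deliberately naive, which is exactly the problem: "Dr." and "St." need context that a simple rule lacks.

# Sketch of pre-processing text before it reaches a synthetic voice.
# The substitution list is invented and incomplete; real ambiguity
# needs more context than these crude rules have.
import re

SUBSTITUTIONS = [
    (r"\bSt\. Louis\b", "Saint Louis"),      # a saint, not a street
    (r"(\d+ \w+) Dr\.", r"\1 Drive"),        # "123 Maple Dr." is a street
    (r"\bDr\. (?=[A-Z])", "Doctor "),        # "Dr. Smith" is a title
]

def normalize_for_speech(text: str) -> str:
    for pattern, replacement in SUBSTITUTIONS:
        text = re.sub(pattern, replacement, text)
    return text

print(normalize_for_speech("Dr. Jones lives at 123 Maple Dr. in St. Louis."))

Most days, though, simply living with the quirks costs less energy than maintaining rules like these.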

Synthetic voices offer an even more powerful editing feature unknown to most sighted writers. The excellent researcher Clifford Nass's "Wired for Speech" tells how our brains react differently to gender, ethnic, age, personality, and other features of synthetic voices. Even if we know the voice is only a data file, we still confer more authority on male voices and react negatively to perceived aggressive female voices. This allows writers editing with synthetic voices to identify phrases with a tone that might be perceived as weak, over-bearing, age-related, or introverted. Don't believe me? Listen to examples of male and female voices.

Note to sighted writers: you might also find these techniques assistive for finding typos, checking style, and evaluating the forcefulness of your writing. Nothing says you have to be visually impaired to try writing by listening.

Complexity becomes more visible with vision loss

When I write my blog, I must address both sighted and unsighted readers. Sighted people see a dull page of text, while people listening to the page or using magnifiers or contrast themes may react differently to a posting on a myriad of textual, graphical, and audible facets. Much of this is out of my control, as I cannot see the appearance of my pages in your browser, nor do I know if you are listening in a browser or an RSS client. Also, your speech settings, if any, may differ from mine in speed, dictionary, gender, and more.

A very insightful article on writing for accessibility points out the ill effects of complex sentence structures, reliance on punctuation, expectations of emphasis, and unawareness of the span of settings possible on the end user's side.

Now, in my technical and business writing days, I was the "queen of convoluted sentences". I just never understood what was wrong with sub-sentences (as long as the sentence parsed OK); rather, I thought them a mark of quality. Whoops, there I did it again. I used a parenthetical phrase that might not be read with parentheses around it. And I relied on a semi-colon to separate sentences. Sorry about that, I'm working hard on this. But there I made another mistake. I used a contraction, "I'm", which synthetic voices can stumble over, when I could say "I am". Abbreviations are also problematic. Should I say ER or E.R. or "Emergency Room"? This is giving me a headache.

The strongest lesson about compensating for vision loss is that ‘complexity really hurts’. Overly complex things, whether physical or informational, cause accidents and invoke recovery methods. All this wastes precious physical energy. It is easy to be discouraged when tasks that could be performed before vision loss are now too expensive in energy or time. But, conversely, I can now see complexity for what it is, usually bad design. And, on the brighter side, once the source of complexity is identified, there may be a work-around, a simplification, or a suggestion for a better design. All this conscious adjustment of expression practices may actually be good training for aging more gracefully. Sigh.

Recordings to Illustrate Writing by Listening

The following recordings accompany this posting. The mp3 files may download or launch a player, depending on your browser and computer settings.


  1. Listen to me writing: shows the screen reader speaking text in Notepad as written and revised.


  2. Spell checking and listening in K1000


  3. Listening in several synthetic voices for gender and other differences

  4. Audio version of this and other posts