Story: A Screen Reader Salvages a Legacy System

This post tells the story of how the NVDA screen reader helped a person with vision loss solve a puzzle at his former workplace. Way to go, Grandpa Dave, and thanks for permission to reprint from the NVDA discussion list on freelists.org.

Grandpa Dave’s Story

From: Dave Mack
To: nvda

Date: Oct 29

Subj: [nvda] Just sharing a feel good experience with NVDA
Hi, again, folks, Grandpa Dave in California, here –
I have hesitated sharing a recent experience I had using NVDA because I know this list is primarily for purposes of reporting bugs and fixes using NVDA. However, since this is the first community of blind and visually-impaired users I have joined since losing my ability to read the screen visually, I have decided to go ahead and share this feel-good experience where my vision loss has turned out to be an asset for a group of sighted folks. A while ago, a list member shared their experience helping a sighted friend whose monitor had gone blank by fixing the problem using NVDA on a pen drive so I decided to go ahead and share this experience as well – though not involving a pen drive but most definitely involving my NVDA screen reader.


Well, I just had a great experience using NVDA to help some sighted folks where I used to work and where I retired from ten years ago. I got a phone call from the current president of the local Federal labor union I belonged to and she explained that the new union treasurer was having a problem updating their large membership database with changes in the union’s payroll deductions that they needed to forward to the agency’s central payroll for processing. She said they had been working off-and-on for almost three weeks and no one could resolve the problem even though they were following the payroll change instructions I had left on the computer back in the days I had written their database as an amateur programmer. I was shocked to hear they were still using my membership database program as I had written it almost three decades ago! I told her I didn’t remember much about the dBase programming language but I asked her to email me the original instructions I had left on the computer and a copy of the input commands they were keying into the computer. I told her I was now visually impaired, but was learning to use the NVDA screen reader and would do my best to help. She said even several of the Agency’s programmers were stumped, but they did not know the dBase programming language.


A half hour later I received two email attachments, one containing my thirty-year-old instructions and another containing the commands they were manually keying into their old pre-Windows computer, still used by the union’s treasurer once a month for payroll deduction purposes. Well, as soon as I brought up the two documents and listened to a comparison using NVDA, I heard a difference between what they were entering and what my instructions had been. They were leaving out some dots, or periods, which should have been included in their input strings to the computer. I called the union’s current president back within minutes of receiving the email. Everyone was shocked and said they could not see the dots or periods. I told them to remember they were probably still using a thirty-year-old low-resolution computer monitor and an old dot-matrix printer, which made the dots or periods appear to be part of the letters they were situated between.

Later in the day I got a call back from the local president saying I had definitely identified the problem. She thanked me profusely and said she was telling everyone I had found the cause of the problem by listening to errors none of the sighted folks had been able to see. And, yes, they were going to upgrade their computer system now after all these many years. (laughing) I told her to remember this experience the next time anyone makes a wisecrack about folks with so-called impairments. She said it was a good lesson for all. Then she admitted that the reason they had not contacted me sooner was that they had heard through the grapevine that I was now legally blind and everyone assumed I would not be able to be of assistance. What a mistake and waste of time that ignorant assumption was, she confessed.


Well, that’s my feel-good story, but, then, it’s probably old hat for many of you. I just wanted to share it as it was my first experience teaching a little lesson to sighted people in my own small way, with the help of NVDA.


Grandpa Dave in California

Moral of the Story: Screen Readers Augment Our Senses in Many Ways (An Invitation to Comment)

Do you have a story where a screen reader or similar audio technology solved a problem where normal use of the senses failed? Please post a comment.


And isn’t it great that we older folks have such a productive and usable way of overcoming our vision losses? Thanks, NVDA project developers, sponsors, and testers.


My Accessibility Check: Let’s All Use Our Headings!


Cringing all the time, I am cleaning up my web sites and these blog pages to conform to accessibility standards and my own growing experience with usability. I plan to break down this effort into HTML facets prioritized by the trouble I have using these features as I browse and perform transactions: headings, links, forms, navigation, graphics, etc.


Sighted readers of this post should learn more about the importance of headings to guide low vision and blind readers. New Vision Losers may learn some benefits of and tricks for using a screen reader. Listen to an excellent video on the importance of headings.

The Values of Standards

For this exercise, I will be using the WebAIM simplified WCAG checklist. The W3C standards are certainly thorough, technically rooted, and well stated. But each facet of web use is complex in its own way, with technical lingo related not only to browsers and HTML but also to human psychology and usability studies. Even richer are the problems of maintaining and using an ecosystem of billions of web pages created in only 15 years by several generations of web developers using constantly changing web technology. Anyone approaching a standards activity is faced with numerous trade-offs, in social as well as technical values. So any checklist like this appears neutral about the relative importance of each criterion, leaving it up to accessibility statements to identify their values and responses.

If one were to assess values, the questions would include: How much harm would be done by violating a certain criterion? How many users would be harmed, and to what degree? My accessibility checking process is based on my personal difficulties, with occasional harm to me but more often to the web page purveyor if I give up or move away in disgust. I have gradually zoomed in on headings as a key criterion for judging a web page’s design intent and execution, relative to the content and use cases of the web site.

Headings 101

Since the beginning of HTML time, an eon ago around 1993, we have had the simple system: H1, H2, H3, H4, H5, H6. Browsers agreed to display these in different font sizes. Headings were a direct take-off on the section structure everyone learns in creating documents: chapter, section, subsection, and so on. These looked really great in early browsers and conveyed the transferable semantics of sections and subsections, especially when heading wording was carefully crafted so that the headings alone conveyed a good outline of the page.
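To make those semantics concrete, here is a minimal sketch of how H1 through H6 tags yield a page outline, using only Python's standard library. This is not code from any real screen reader, and the sample HTML is made up for illustration:

```python
from html.parser import HTMLParser

# Collect (level, text) pairs for every H1-H6 heading in document order.
class HeadingOutline(HTMLParser):
    def __init__(self):
        super().__init__()
        self.level = None      # level of the heading tag currently open
        self.outline = []      # (level, text) pairs in document order

    def handle_starttag(self, tag, attrs):
        if tag in {"h1", "h2", "h3", "h4", "h5", "h6"}:
            self.level = int(tag[1])

    def handle_data(self, data):
        if self.level is not None and data.strip():
            self.outline.append((self.level, data.strip()))

    def handle_endtag(self, tag):
        if tag == f"h{self.level}":
            self.level = None

page = "<h1>Recipes</h1><p>intro</p><h2>Soups</h2><h3>Lentil</h3><h2>Breads</h2>"
parser = HeadingOutline()
parser.feed(page)
for level, text in parser.outline:
    print("  " * (level - 1) + text)   # indent by heading level
```

Reading that indented outline aloud is essentially what a heading tour in a screen reader gives you: a table of contents for free, provided the author actually used the tags.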

Then came page styles, with more concern for fonts, colors, and page layout. Standard heading styles didn’t always mesh well with the desired look as pages were divided into frames, columns, and navigation bars. So headings became more problematic and fell out of use. Search engines have been said to apply extra weight to heading terms, on the assumption that these were chosen to emphasize a section’s purpose and content. But super-powerful indexing of every term on a page lessened the impact of headings.

Now, what do the standards say? Right up front, criterion 1.3.1 calls for semantics on web pages: not only headings but also proper tagging for lists, quotes, and real tables of data. “Semantics” means “meaning”, i.e. a heading covers a block of the page, and lower-level headings mark subsections within it. Following this logic, a page has its subject at level H1 and sections that correspond to both the content and the use of the page. Here come some tricky parts if headings are used faithfully: e.g., what is the level of a search box, or of page maintenance information, or, for that matter, of the main content of the page?

The Webaxe podcast and blog on accessibility is an excellent tutorial and reference to other web sources on accessibility. Here are the WCAG 2.0 guidelines for section headings.

Rationale for using headings

Usability for screen readers

A screen reader strips out the visual-only aspects of web pages and reads out the primary content: headings, lists, tables, graphic descriptions, links, etc., as well as paragraphs. My NVDA screen reader has explicit settings for which HTML elements to read out, as well as settings for the voice, e.g. the amount of punctuation and the indication of capitalization.


The first thing I do upon reaching a new web page, using Firefox and the NVDA screen reader, is hit the h key to tour the page by headings, listening to both the descriptions and the levels, trying to build a mental map of the page. I can also use the 1, 2, … keys to traverse the page by heading level.

A YouTube video demo and appeal for using headings tells the story exceptionally well. With headings on a page, you are off and running into the content immediately. Without headings, I try looking for meaningful links, then lists, then tables. When those fail, I grow increasingly grumpy if the page is long and must be covered line by line, or by tabbing among HTML elements in the top-to-bottom, left-to-right reading order.

General readability and writability

Just as I am never really satisfied with my own heading structure, I react to the intuitive flow of reasoning I discover in a page’s outline. The screen reader has a marvelous way of building or destroying confidence in the underlying design of the page and content. I can “feel” the flow of a page and the thinking of its authors.

With headings, I can drill down into the subjects that interest me, traverse backward and forward in skimming fashion, and maintain an understanding of the page’s content and my location within it. No headings and I must read linearly or traverse lower level elements, which often is appropriate for list elements but not strings of graphics or paragraphs.

Quality control and maintenance

Software engineers gradually learn the value of design, following templates, working top down, modularizing content, and many other principles. We learn that lack of structure will kill us when we need to make changes, which will happen sooner or later. We also understand a process called refactoring that systematically moves functions, expands classes, and regularizes parts of a system. It’s only a belief on my part, but I wonder if a page developer not using headings really knows what the page should be saying. Of course, real life suggests that the original developer is often long gone and the page owners maintain the page themselves as their situations change. No wonder pages turn out so messy!

Another reason I react to poor heading structure is that I know the page has not been adequately tested with a screen reader and an attentive human. If the only headings on a page are H5 and H1, that’s better than nothing, but why would the tester not recognize and fix this anomaly? Often this signifies that the page developers are following standards without real understanding or care. Another reason is the industrial origins of screen readers, with $1000 price tags and adverse licensing that make it difficult to test a page. NVDA obviates that reason, but there are still difficulties with the tester’s ability to understand a synthetic voice and work screenless.

Examples of headings

Here are some pages I like and dislike for their use of headings. Other visually impaired readers most likely will have other feelings based on their skills, tools, interest in the web site and content, and mental state.

Good use of headings

  1. The W3.org web standards parent site makes the tradeoff of using mostly level H2 sections, with many additional pages in this large site. More subheadings could be used, it seems to me, for example in the months for presentations on the talks page.
  2. WordPress.com maintains excellent structure in its templates and working pages, such as tags. However, the main page jumps around from H1 to H6, obviously in search of some look I cannot appreciate via screen reader.
  3. Google search results are organized by H2 for sponsored and search results, with the results in a list at H3 level. Since the headings are also links, this makes browsing a list of links quite rapid. However, in a stroke of inconsistency, news results and some other types of searches are not tagged with headings, making those results far less useful with a screen reader.

Poor use of headings

  1. Word Web Online has only an H1 and would benefit from subsections for parts of speech, e.g. immediately announcing a noun usage and pointing to other uses. Sections for other dictionaries and linguistic tools would also help.
  2. The Association for Computing Machinery at acm.org is the premier computing professional association, one that I, unfortunately, belong to for access to its digital library of publications. The heading structure is H1, H5, H5, H5, H1. What were they thinking? The page is not so badly organized, but the heading readout is jarring.
  3. Computing Research Association has no headings or semantic cues. The page is laid out in visual sections but without any of that information transmitted via screen reader.

Exceptions from using standard headings

  1. While the main Amazon.com is a royal mess, with a page full of links difficult to classify by headings, the alternative mobile-accessible amazon.com/access has no headings at all. I find this acceptable in the spirit of minimalism: the page can be traversed in a few tabs, or you can jump immediately to the product search box.

How did we stray from the wisdom of headings?

One reason for haphazard use of headings is certainly the conflict between the visual appearance of headings and the desired look of pages, although this can be cured by style sheets. It is also difficult to reconcile section headings with navigation elements and actions from use cases on the same page as descriptive content. However, my bet is that a little more thinking could come up with palatable heading descriptions that would satisfy a screen reader user as well as a visual user. Additional arguments based on engineering principles for quality and maintenance are difficult to teach within software engineering but gradually become the stuff of bitter experience for truly professional web developers.

So, what do I advise?

  1. Use as many headings on your pages as you have logical groups of elements. This one step is the most effective accessibility step you can take for the broadest range of users.
  2. Try but don’t fuss too much over the true hierarchy, i.e. an H4 under an H2 or H2 topics not really at the same level. Using a screen reader will be much easier although the anomalies will be noticeable. However, each anomaly is something to question about your overall page structure.
  3. Of course, there are really no-win situations. An example is the use of headings within a blog post that don’t fit into the levels of a posting list, as in the WordPress tag surfer.
  4. Test your page using NVDA or a proprietary screen reader, listening carefully to the sections and page outline. This is easy. Just start NVDA, set the preferences to the page elements you want, bring up Firefox with your page, and press the h key to move around your headings. Other browsers may perform differently, and you might need a more soothing synthetic voice, but this should be part of any test environment.
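The level anomalies mentioned in item 2 can also be caught mechanically, before any listening test. Here is a rough sketch, assuming the heading levels have already been extracted in document order; the sample lists are hypothetical:

```python
def heading_anomalies(levels):
    """Flag each heading that jumps more than one level deeper
    than its predecessor, e.g. an H5 directly under an H1."""
    problems = []
    for i in range(1, len(levels)):
        if levels[i] > levels[i - 1] + 1:
            # record (position, previous level, offending level)
            problems.append((i, levels[i - 1], levels[i]))
    return problems

print(heading_anomalies([1, 2, 3, 2, 2]))   # well nested: no anomalies
print(heading_anomalies([1, 5, 5, 5, 1]))   # the jarring H1/H5 outline above
```

A check like this will not judge whether the heading wording is meaningful, but it reliably flags the structural jumps that sound so strange during a screen reader tour.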

Synthetic Voice Shock Reverberates Across the Divides!

Synthetic Voice Shock — oh, those awful voices!


As I communicate with other persons with progressive vision loss, I often sense a quite negative reaction to synthetic, or so-called ‘robotic’, voices that enable reading digital materials and interfacing with computers. Indeed, that’s how I felt a few years ago. Let’s call this reaction "synthetic voice shock" as in:

  • I cannot understand that voice!!!
  • The voice is so inhuman, inexpressive, robotic, unpleasant!
  • How could I possibly benefit from using anything that hard to listen to?
  • If that’s how the blind read, I am definitely not ready to take that step.

Conversely, those long experienced with screen readers and reading appliances may be surprised at these adverse reactions to the text-to-speech technology they listen to many hours a day. They know the clear benefits of such voices, rarely have difficulty understanding them, exploit voice regularity and adjustability, and innovate better ways of “living big” in the sighted world, to quote the LevelStar motto.

The ‘Synthetic Speech’ divide


Synthetic voice reactions appear to criss-cross many so-called divides: digital, generational, disability, and developer. The free WebAnywhere is the latest example, with a robotic voice that must be overcome in order to gain the possible benefits of its wide dissemination. Other examples are talking ATMs and accessible audio for voting machines. The NVDA installation and default voice can repel even sighted individuals who could benefit from a free screen reader as a web page accessibility checker or as a way to learn about the audio assistive mode. Bookshare illustrates book reading potential with a robotic, rather than natural, voice. Developers of these tools see the synthetic voice as a means to gain the benefits of their tools, while users not accustomed to speech-enabled hardware and software run the other way at the unfriendliness and additional stress of learning an auditory rather than visual sensory practice.


This is especially unfortunate when people losing vision turn to magnifiers that can only improve spot reading, when extra hours and energy are spent twiddling fonts and then working line by line through displayed text, when mobile devices are not explored, and when the pleasures of book reading and the quality of information from news are reduced.

Addressing Synthetic Voice Shock


I would like to turn this posting into messages directed at developers, Vision Losers, caretakers, and rehab personnel.

To Vision Losers who could benefit sooner or later

Please be patient and separate voice quality from reading opportunities when you evaluate potential assistive technology.


The robotic voice you encounter with screen readers is used because it is fast, flexible, and widely accepted by the blind community. But there do exist better, natural-sounding voices that can be used for reading books, news, and much more. While these voices may seem offensive at first, synthetic voices are actually one of the great wonders of technology, opening the audio world to the blind and gradually becoming common in telephony and help desks.


As one with Myopic Macular Degeneration, forced to break away from visual dependency and embrace audio information, I testify that it takes a little patience and self-training, and then you hear past these voices and your brain naturally absorbs the underlying content. Of course, desperation from print disability is a great motivator! Once the resistance to synthetic voices is overcome, a whole new world of spoken content becomes available using innovative devices sold primarily to younger generations of educated blind persons. Freed of the struggle to read and write using defective eyesight, there is enormous power to absorb an unbelievable amount of high quality materials. As a technologist myself, I made this passage quickly and really enjoyed the learning challenge, which has made me into an evangelist for the audio world of assistive technology.


If you have low vision training available, ask about learning to listen through synthetic speech. For the rest of our networked lives, synthetic voices may be as important as eccentric viewing and using contrast to manage objects.


So, when you encounter one of these voices, maybe think of them as another rite of passage to remain fully engaged with the world. Also, please consider how we can help others with partial sight. With innovations like WebAnywhere and free screen readers like NVDA, there could be many more low-cost speaking devices available worldwide.

To Those developing reading tools with Text-to-Speech



Do not expect that all users of your technology will be converts from within the visually impaired communities familiar with TTS. Provide a starter voice tuned for pitch, speed, and simplicity to achieve the necessary intelligibility and sufficient pleasantness. Point out that better voices are also available and show how to use them.


It’s tough to spend development effort on such a mundane matter as the voice, but technology adoption lessons show that it only takes a small bit of discouragement to ruin a user’s experience and send a tool they could really use straight into the recycle bin. Demos and warnings could be added to specifically address Synthetic Voice Shock and show off the awesome benefits to be gained. The choice of a freely available voice is a perfectly rational design decision but may indicate a lack of sensitivity to the needs of those newly losing vision, forced to learn not only the mechanics of a tool but also how to listen to this foreign speech.

To Sighted persons helping Vision Losers

You should be tech savvy enough to separate the voice interface from the core of the tool you might be evaluating for a family member or a demonstration. Remember that the recipient of the installed software will be facing synthetic voice shock, possible dependency on the tool, and a long learning curve. Somehow, you need to make the argument that the voice is a help, not a hindrance. Of course, you need to be able to understand the voice yourself, perhaps translate its idiosyncrasies, and tune its pitch and speed. A synthetic voice is a killer software parameter.


You may need to seek out better speech options, even outlay a few bucks to upgrade to premium voices or a low-cost tool. Amortized over the lifetime hours of listening to valuable materials, maintaining an independent lifestyle, and expanding communication, a $100 voice interface is a great bargain.


And, who knows, many of the voice-enabled apps may help your own time shifting, multi-tasking, mobile life styles.

To Rehab Trainers

In the meager amount of rehab available to me, the issue of Synthetic Voice Shock was not addressed at all. Eccentric viewing, the principles of contrast for managing objects, a host of useful independent living gadgets, font choices, etc. are traditional modules in standard rehab programs. Perhaps it would be good to have a simple lesson listening to pleasant natural voices combined with rougher menu readers, just to show it can be done. Listening to synthetic voices should not be treated like torture but rather like a rite of passage to gain the benefits brought by assistive technology vendors and already widely accepted in the visually impaired communities. Indeed, inability to conquer Synthetic Voice Shock might be considered a disability in itself.


As I have personally experienced, it must be especially difficult to handle Vision Losers with constantly changing eyesight and a mixed bag of residual abilities. It could be very difficult to tell Vision Losers they might fare better reading like a totally blind person. But when it comes to computer technology, that step into the audio world can reduce the stress of struggling to see poorly in a world geared toward hyperactive, visually oriented youngsters, especially when print disability opens the flow of quality reading materials, often ahead of the technology curve for sighted people.


The most useful training I can imagine is a session reading an article from AARP, Sports Illustrated, or a New York Times editorial, copied into a version of TextAloud or a similar application with premium voices. Close those eyes, just relax and listen, and imagine doing that anywhere, in any bodily position, with a daily routine of desirable reading materials. To demonstrate the screen reader aspect, the much-maligned Microsoft Sam in Narrator can quickly show how menus, windows, and file lists can be traversed by reading and keystrokes. The takeaway of such a session should be that there are other, perhaps eventually better, ways of reading print materials and interacting with computers than struggling with deteriorating vision, assuming hearing is sufficient.

So, let us pay attention to Voice Shock


In summary, more attention should be paid to the pattern of adverse reactions of Vision Losers unfamiliar with the benefits of the synthetic speech interaction that enables so many assistive tools and interfaces.

References on Synthetic Voice Shock

  1. Wikipedia on Synthetic Speech. Technical and historical, back to 1939 Worlds Fair.
  2. Wired for Speech, research and book by Clifford Nass. Experiments with effects of gender, ethnicity, personality in perception of synthetic speech.
  3. Audio demonstrations using synthetic speech
  4. NosillaCast podcaster Allison Sheridan interviewing her mother, who has macular degeneration, about her new reading device. Everyzing is a general search engine for audio, as in podcasts.
  5. Example of a blog with natural synthetic speech reading. Warning: Political!
  6. Google for ‘synthetic voice online demo’ for examples across the synthetic voice marketplace. Most will download as WAV files.
  7. The following products illustrate Synthetic Voice Shock.
  8. Podcast Interview with ‘As Your World Changes’ blog author covering many issues of audio assistive technology
  9. Audio reading of this posting in male and female voices

Learning to Write By Listening

Revamping writing skills is a major phase in vision loss transition

One reason for starting this blog was to regain my writing skills. This post describes my personal techniques for writing while using a screen reader and other assistive tools. A suite of recorded mp3 files illustrates some steps in rewriting and expanding the previous post on the Identity Cane.

Most of this post assumes a state of experience comparable to mine three years ago before I became print-disabled. It was hard then to know what questions to ask to prepare myself. I bumbled through using the TextAloud reading application which enabled me to write well enough while I could control the lighting around my PC and begin to experiment with alternative screen reader packages. Unfortunately, I had some truly humbling experiences trying to edit rapidly at review panel meetings with overhead lights bearing down, voices all around, and a formidable web-based panel review system. Following the edict "Do no harm" I recognized a challenge of physical, cognitive, and technological dimensions. I had to admit I was professionally incompetent when it came to writing, ouch!

My model for writing without vision

The basic questions are:

  • What are my accuracy versus speed trade-offs? And, how do I manage them?
  • What tools do I need? And, how do I teach them to myself?
  • How must I change my writing style? What are the new rules of ‘writing by ear’?

If you are not sure how this writing process is working, listen to me writing some text using the NVDA screen reader.

The tradeoffs of accuracy and speed

The Accuracy Versus Speed Tradeoff is intrinsic to writing. How fast do you record your thoughts, accepting some level of typing and expression errors, with separate clean-up edits and rewrites? If I type very fast, I make more errors but am better able to record the thoughts and even establish a "flow" mental state. Writing more slowly allows corrections of wording, punctuation, and spelling but risks loss of thread and discouragement from a feeling of slowed progress.

Writing and editing are very different cognitive tasks, complicated by operating primarily in listening mode. The input and output parts of the brain must operate together. A document filled with typos is pure agony to correct, causing a cascade of further errors and often destroying the structure of the whole document. One twitch in an edit can remove more than a letter: even a line, sentence, or paragraph. In “computational thinking” terms, the trade-off is to design the interactions of two concurrent processes that interleave events and actions to produce a document with an optimal number of errors to be removed by even more processing involving editing tools.

I tried several drafting techniques. Writing longhand notes, outlines, and snippets had worked for 40 years, but I could no longer read my handwriting. Recording into my Icon PDA helped organize my thoughts and extract some pithy phrases from my brain. As my memory has improved to take over former vision-intensive tasks, I have found it possible to mentally compose a paragraph at a time, then hold it together long enough to type into the word processor.

Basic writing and Listening Tools

Writing without looking requires several tools, with my choices discussed below:

  • Compositional, for typing, formatting as needed, and editing
  • Spell checker, possibly a style or grammar checker
  • Previewer to present the written results as they will be read by sighted, partially sighted, and blind readers
  • Speech tools to read while typing and editing, as well as presentation of the written result
  • Voices to capture alternative audio presentations of written results, as well as feedback on style and tone

My personal process is:

  • Compose in mostly text with minimal HTML markup using Windows Notepad;
  • Use the NVDA screen reader for key and word echo, with punctuation announcement off then on;
  • Copy the text into the K1000 tool, applying its fabulous spell checker, listening for errors and speaking flaws using its self-voicing reader, and copying back to Notepad;
  • Listen in several voices, including both female and male, for flaws and nuance of style;
  • Preview in a browser, Mozilla Firefox, to grasp whatever I can see on a large screen and to check links;
  • Copy into wordpress blog editor.

The obviously best choice for writing is the word processor most familiar to the writer. However, criteria may change as vision degrades. The spell checker may not have visible choices and may not announce its fields to a screen reader. Excess interface elements and functionality can get in the way. Upgrades and transition to a new computer may demand new software purchases. After years of Microsoft Word and Netscape HTML Composer, I finally settled on the combination of Windows Notepad and Kurzweil 1000. The trickiest feature of the ubiquitous Notepad is “word wrap” for lines, with very few other ways for a writer to screw up a document. Since I write HTML for my website and blog, using Notepad avoids the temptations of fancy pages by not being WYSIWYG. Also, Notepad never nags about licenses, discount deals, and upgrades.

On the upscale side, I needed a scanner manager for books and other printed stuff. The Kurzweil Educational Systems 1000 offers not only scanner wrappers but also several word processor features. One is a beautiful spell checker that reads the context, spells the word, and offers alternatives, all using its own self-voiced interface. Listen to me and the K1000 spell checker. I also like having a reader with alternative word pronunciation, pausing, and punctuating. However, I occasionally lost text due to lock-ups and unpredictable file operations, so I opted for the universal, simple Notepad for composition.

Update December 2008. I am now using the free Jarte editor, based on WordPad. Behaving like Windows WordPad, Jarte has a spell checker similar to the K1000's, multi-document management, and other features. Most importantly, its interface recognizes and cooperates with a screen reader, NVDA for me. The Carolina software designers have done a great service for visually impaired writers and should serve as a model for interface developers of other software products. I’ll soon be upgrading from the free version, paying for some extra features.

A screen reader drives writing by listening

As discussed in my NVDA screen reader choice posting, I have passed over the conventional, expensive screen readers in favor of a free, open source wonder that I expect to rule the future of assistive technology. NVDA lets me switch among voices, choose key and word echo, and set the degree of punctuation announced.

Writing and reading by listening has surprising consequences. First, it sharply differentiates sighted readers from listeners, who will probably not hear the colon you use to start a list of clauses separated by semi-colons. Second, documents must be read multiple times, both with and without punctuation announcements. It is difficult to concentrate on the sentences when every comma, quotation mark, and dash is read, yet it is necessary to hear every apostrophe and other punctuation mark to locate extraneous as well as missing items.

Synthetic voices alter writing practices

Another suite of editing tools is synthetic voices, which may come as a surprise to many sighted as well as newly unsighted writers and readers. Synthetic voices have dictionaries of pronunciations but inevitably screw up in certain contexts. Is that "Dr." a street or an educational title? Is "St. Louis" the city with a saint or a street? Is "2" the numeral, "two" spelled out, or "too" as in "also"? Whatever your screen reader settings and data, your readers' may differ. Some of this can be tweaked, but generally my attitude has been to just live with the quirks.

Synthetic voices offer an even more powerful editing feature unknown to most sighted writers. The excellent researcher Clifford Nass's "Wired for Speech" tells how our brains react differently to the gender, ethnicity, age, personality, and other features of synthetic voices. Even if we know the voice is only a data file, we still confer more authority on male voices and react negatively to perceived aggressive female voices. This allows editors using synthetic voices to identify phrases with a tone that might be perceived as weak, over-bearing, age-related, or introverted. Don’t believe me? Listen to examples of male and female voices.

Note to sighted writers: you might also find these techniques assistive for finding typos, checking style, and evaluating the forcefulness of your writing. Nothing says you have to be visually impaired to try writing by listening.

Complexity becomes more visible with vision loss

When I write my blog, I must address both sighted and unsighted readers. Sighted people see a dull page of text, while people listening to the page, or using magnifiers or contrast themes, may react differently to a posting on a myriad of textual, graphical, and audible facets. Much of this is out of my control, as I cannot see the appearance of my pages in your browser, nor do I know whether you are listening in a browser or an RSS client. Also, your speech settings, if any, may differ from mine in speed, dictionary, gender, and more.

A very insightful article on writing for accessibility points out the ill effects of complex sentence structures, reliance on punctuation, expectations of emphasis, and unawareness of the span of settings possible on the end users side.
Now, in my technical and business writing days, I was the "queen of convoluted sentences". I just never understood what was wrong with sub-sentences (as long as the sentence parsed OK); rather, I thought them a mark of quality. Whoops, there I did it again. I used a parenthetical phrase that might not be read with parentheses around it. And I relied on a semi-colon to separate sentences. Sorry about that, I’m working hard on this. But, there I made another mistake. I used a contraction, “I’m”, which synthetic voices have trouble pronouncing, when I could say "I am". Abbreviations are also problematic. Should I say "ER" or "E.R." or "Emergency Room"? This is giving me a headache.

The strongest lesson about compensating for vision loss is that ‘Complexity really hurts’. Overly complex things, whether physical or informational, cause accidents and invoke recovery methods. All this wastes precious physical energy. It is easy to be discouraged when tasks that could be performed before vision loss are now too expensive in energy or time. But, conversely, I can now see complexity for what it is, usually bad design. And, on the brighter side, once the source of complexity is identified, there may be a work-around, a simplification, or a suggestion for a better design. All this conscious adjustment of expression practices may actually be good training for aging more gracefully. Sigh.

Recordings to Illustrate Writing by Listening

The following recordings accompany this posting. Mp3 files may download or launch a player, depending on your browser and computer settings.


  1. Listen to me writing — shows the screen reader speaking text in Notepad as written and revised.
  2. Spell checking and listening in K1000.
  3. Listening in several synthetic voices for gender and other differences.
  4. Audio version of this and other posts.

Virtual Stocking Stuffers for Vision Losers

To overcome my life-long tendency to emulate Scrooge at this time of the year, I am happy to share some pointers to gadgets, gear, and comfort items I have come to appreciate especially in my first full year of diminished vision.


Now, is this theme about stockings that are virtual or are the stuffers of a virtual kind? Both, really, these are things one might want to buy for oneself or for a Vision Loser family member or acquaintance. One thing I have learned is that cost is more than money. The overhead of making a purchase, tracking receipts and accounts, setting up a working version of something, and integrating it into my routine takes a precious commodity — physical and mental energy. Any gift that reduces energy load and doesn’t require disproportionately more energy to acquire and maintain is especially helpful to Vision Losers.


First, the “free” stuff, meaning worth a trial and consideration for investing learning time. I have written about nvda, the open source screen reader from the nvAccess project based in Australia. This remains my mainstay for reading text and navigating screens, getting better all the time. The organization is also a great place for an end-of-year donation, as are other vision-assisting organizations like mdSupport.org, information and community for macular degenerates.

Based on interviews and recommendations within the blind community, as heard on ACB Radio Main Menu, Accessible World, and Blind Cool Tech, I am starting to use the vision-avoiding software FileDir and TextPal from Jamal Mazrui, a Microsoft-oriented developer. The downloadable FileDir sets up easily with a gazillion shortcuts and menu entries that expand and provide an alternative model for Windows Explorer, notably tagging files and directories as opposed to extending selections, talking responses to actions, and conversion to text of PDF, DOC, and other less speech-friendly formats. Accessible Software has other utilities to try.


What every Vision Loser learning to type with reduced vision needs is a really good spelling checker that reads mis-spelled words, suggestions, and context. Kurzweil 1000 has by far the best checker, but that’s $1000 software, which also supports easy document scanning. Since I use the absolute minimalist Windows Notepad for most typing, exactly because it doesn’t have extra tricky functionality, I am asking my Santa for a stand-alone spelling checker just like K1000’s – please, please, please. A neat feature of Google, as related on the Google blog under the topic “accessibility”, is its ability to correct proper nouns you might hear but cannot spell, giving the most popular spelling on the web.


In the low-cost gift category are the Microsoft mouse models with magnifiers, especially the larger one with extra buttons for assigning functions, as discussed in our early post on “Mouse Hacks”. Don’t forget to strip this gift of its hard plastic cover, which can stymie just about any human, let alone someone who can’t see where to poke a sharp instrument. Avoid a trip to the emergency room.


For the beginner Vision Loser, a great all-around bargain is TextAloud from nextup.com, which reads saved documents or text copied to a clipboard and also converts them to mp3 files for digital player listening. With a few checks in your browser menus, you can have a TextAloud toolbar to read pages, with an added bonus of zoom buttons. And don’t forget the premium voices that over-ride the robot-like Microsoft Sam, Mary, and Mike. In fact, if your gift recipient likes to listen to long-playing materials or is picky about voices, you can assemble a small choir of NeoSpeech, RealSpeak, Nuance, Cepstral, and other voices at about $30 each. Except for Cepstral, which had license problems, these voices work nicely with the nvda screen reader and the documents it reads out.


A surprisingly useful piece of equipment is an external wireless keyboard. Plug in its USB receiver, recline before the warm fireplace, and practice your screen reading skills, like “speed browsing”. Once you have unglued your eyes from a screen, your versatility of skills can promote more degrees of comfort than you might imagine. These full-sized keyboards are available for under $100 from most consumer stores, but it helps to add a lap board and maybe a wrist rest, as faster fingers and a different posture can put a lot of load on thumbs and wrists. “Safety first,” says my guiding philosophy (previous post), and there is no need to invite the secondary disability of repetitive strain injuries.


The world of so-called Independent Living Aids has some amazing stuff. I use, more than I had expected, a little sensor and voiced reader that tells me the color of clothes, so I less often pack mis-matched blue and black for a trip. It’s cute, saying “blue” in kind of a tentative voice, requiring a good window of natural sunlight, and, unfortunately, failing to tell me when I leave home with a sweater on wrong side out. My next consumer goals are labels for just about everything and a system for finding the stuff I mis-place.


If your Vision Loser has reached the certifiable level of print disability, congratulations: membership in Bookshare.org is available at $75 plus a trip to the eye doctor for the certification. 35,000 books, many recent best sellers and a host of disability-related texts, await someone who needs to expand or replace physical book collections. A voiced reader is needed, on PC or hand-held. Bookshare will be expanding rapidly as a provider, under U.S. Department of Education grant funds, of textbooks to print-disabled students across the U.S., within limits of student eligibility and publisher constraints. Moreover, a constellation of book clubs is now starting up in the Friends of Bookshare chat room. Bookshare propagates the National Federation of the Blind Newsline to deliver newspapers right to your doorstep.


Switching over to hand-held reading appliances, new this year is the Victor Reader Stream from Humanware. I prefer the Bookport from the American Printing House for the Blind, which is unfortunately out of stock until components are available for the next major release. The Stream, like the Bookport, is about the size of a pack of cards, with content loaded onto its storage card from a PC, then read aloud with a synthetic voice. Digital Talking Books from Bookshare, podcasts, other mp3 files, and all kinds of memos can be copied to the Stream and annotated using its voice recorder. Of course, just like the teens get for gifts, there are all kinds of accessory ear buds and mini-speakers, even ones incorporated into pillows (hint, hint!).


Way up the ladder of costs is the remarkable Icon PDA from Levelstar at $1400, plus an optional promised $400 docking station. Integrated with Bookshare, working well with a home wireless network, and containing fully functional email, browser, and RSS/podcast clients, the Icon is with this Vision Loser hours a day. In fact, my newspapers are delivered without getting out of bed, along with a first pass at email, podcasts, and many mailing lists. I suppose my TV still works, if I could find the remote, but the Icon provides most of the news I used to get from papers and magazines and TV. In fact, my favorite radio and TV shows, the Lehrer news hour and WAMU's Diane Rehm, are available in podcast format. And the speed of reading using the Icon is amazing, with no page flipping and, of course, no need to recycle piles of paper. I would not put the Icon into the hands of someone yet to become comfortable with synthesized voices, but there’s no need to learn a screen reader with an Icon, because there is no screen, only voiced menus. And Levelstar provides an exceptional set of podcast tutorials, including upgrade changes.


And I, this geeky Vision Loser, offer a free podcatcher, @Podder from apodder.org. While other podcatchers, like the one on the Icon, conveniently download, play, and throw away podcasts, @Podder supports collections of podcasts on hobbies, news, whatever someone might think worth keeping to listen to later, for reference or repeat enjoyment. In fact, this blog is sprinkled with web pages of podcasts from a growing library of over 2000 podcasts on eyesight-related topics. For the more advanced listener, here are OPML files if you want to track accessibility progress or listen in on the lively blind community's eyesight-related blogs and podcasts. Use Podzinger audio search to find podcasts on specific eyesight topics.


But, for all the good cheer my geeky devices bring me, my immediate geographical community is disappointing. There is only one bus, making mainly the mall route hourly. A community center was built within walking distance of my home, but without even a sidewalk, requiring a stretch of walking next to traffic in a bike lane. The only mobility trainer in the county is booked for months, so I cannot get the training I need for more comfortable and safe traveling. The local newspaper is a loss for website browsing and not available on Newsline, limiting my awareness of local events. OK, the U.S. has such wealth, but skewed priorities against disability, a bitter lesson for the newly disabled. At least next year I will be back on a level playing field for health insurance with Medicare. If only one of the vacant over-priced houses in my neighborhood could be converted to social services, then independent Vision Losers, with many more Baby Boomers soon to have failing eyesight, could make the transition more gracefully, safely, and productively. A lump of coal to those who cannot see the value of taxes as investments in the younger, the older, and the differently abled. And a heap more coal to the many who don’t realize this basic truth: “Designing for the disabled produces better products for all”, because the disabled expose the design flaws and suggest solutions the “fully abled” would not think of.


Please visit @Podder collected podcasts on eyesight topics for a broad sampling of the news, reviews, personal revelations, and activist actions of trickle-down helpfulness from the blind community.

Look, ma, no screens!! nvda, NonVisual Desktop Access, is my new Reader.

Summary: This Vision Loser makes the transition to screen reader dependence, sets up her new tablet notebook with mostly open source apps, and learns many painful new routines.

As my vision changed over the past year, I started to use Narrator, the minimalist screen reader built into Windows XP, speaking in Microsoft Sam. I had seen and heard demos of the standard Freedom Scientific JAWS and GW Micro Window-Eyes, and also tried the newcomer System Access to Go, but could not bring myself to invest the $$ fees, the upgrade slippery slope, and the irreversible learning time. However, something deeper, perhaps my Rebel archetype, said “don’t go with the traditional, but find your own pathway.” After all, I’m not on the “rehab grid”, I pay my own way, I appreciate and understand software, and I have time to experiment.

A short flirtation with the Thunder screen reader supported many of my needs, but was rather, well, quirky. A podcast on ACB Replay and a review from Blind Geek Zone introduced the nvda (NonVisual Desktop Access) open source, free screen reader from young Michael Curran, a blind Australian, and his budding infrastructure nvAccess. A simple install, the quick start on the screen, an easy switch to my own synthetic voices, and a bout of fumbling with the keyboard, and I knew this was, for me, “the real thing”.

As luck would have it, my Dell notebook’s screen dissolved and I needed to move my primary connectivity and screen to a backup Toshiba tablet, now also getting a bit old and precarious. With a new tablet moving into the household, along with the Linux-based Icon PDA, it was time to totally remodel my computing environment and my brains, hands, mouse, and reflex “operating system”.

Any relocation, whether household or computer, is a time of mental and emotional turmoil. What applications should I move, e.g. the text reader discussed earlier, and the voice data files I’ve grown accustomed to? Where are the license keys, the setups, or links to later versions? Maybe it’s also time to revamp my myriad email accounts, now mostly funneled through gmail, which I love-hate? Do I want to commit my new setup to the “stove pipe of evil” — Microsoft Office, Internet Explorer, Outlook Express? A month later, I’m trying to distill in this post my painful experiences, with more to come later on gmail, portable apps, and recent announcements from Mozilla and IBM.

First, let’s define a “screen reader” as really a “screen listener”, which responds to events from the Windows operating system and running applications as the user moves focus around the screen. Usually the OS and applications express themselves with dialog boxes and wait for user requests on menus and buttons. The screen listener picks up information about these events and speaks it through a speech engine and chosen synthetic voice files. This is really complicated: there are many levels of operating system and application software, mechanical and electronic hardware in keyboards and mice, and users flitting around the screen looking for something, their fingers or finger surrogates' twitching movements producing a rapid stream of events to be mediated by the screen listener, which vies with other processes for memory resources, preferably without crashing.
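For the programmers among my readers, the "screen listener" idea above can be sketched in a few lines of Python. This is strictly a toy model under my own assumptions, not NVDA's actual code; the event kinds and function names are illustrative inventions.

```python
# Toy model of a "screen listener": the OS pushes focus/UI events,
# and the listener turns each one into a phrase for the speech engine.
# Event kinds and names here are illustrative, not NVDA internals.

from collections import deque

def phrase_for(event):
    """Turn one (kind, name) UI event into the text a voice would speak."""
    kind, name = event
    templates = {
        "focus":  "{} focused",
        "dialog": "dialog: {}",
        "button": "button: {}",
        "menu":   "menu: {}",
    }
    # Unknown event kinds fall back to just speaking the widget name.
    return templates.get(kind, "{}").format(name)

def run_listener(events):
    """Drain the event queue in arrival order, returning speech output."""
    queue = deque(events)
    spoken = []
    while queue:
        spoken.append(phrase_for(queue.popleft()))
    return spoken

if __name__ == "__main__":
    # A user tabbing through a Save As dialog produces a stream of events.
    demo = [("dialog", "Save As"),
            ("focus", "File name edit"),
            ("button", "Save")]
    for line in run_listener(demo):
        print(line)
    # prints:
    # dialog: Save As
    # File name edit focused
    # button: Save
```

The real complexity, of course, is that events arrive faster than speech can keep up, from many layers of software at once, which a real screen reader must interrupt, prioritize, and filter.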

Narrator is actually understated in value, as Microsoft software goes. Upon initiation, a dialog warns that you’ll probably want a more robust screen reader for everyday use, but, well, here’s Narrator for backup or to get you started. Indeed, one purpose of Narrator is to assist Windows installation. If you are unfamiliar with Narrator, go to the Start button, choose Run, and type Narrator, or find and work through the Accessibility Wizard. Narrator will occasionally choke when Windows is in a precarious state, but can usually be counted on to walk through the primary windows on the screen and through the file explorer. Therefore, here’s my

Fundamental rule of survival:

(***) Keep Narrator as a backup and remember how to use it with different types of outage: eyesight, mouse, keyboard, resources. It’s there on my desktop as a shortcut in my 911Emergency folder, on the Windows start menu, and specifically added in the startup directory (under users + You + startup). Of course, you have to find it first and create a shortcut to copy around. And there’s always Start button + Run + Narrator.

Setting up nvda:

nvda is available from … with either an installer or a zip extractor version. The installer may be hard to understand voice-wise and may be overkill. nvda has the very important property of being a Portable App that keeps all its files in a single directory and will run from wherever it’s extracted, including a USB memory stick. Portability means that you can walk up to a modern Windows system, plug in the memory stick, start nvda from an autorun or shortcut, and you’re in screen listening mode, albeit maybe not with your accustomed voices.

nvda has a number of Preferences to set up or leave as defaults: speech engine, voice and its speed, how much punctuation to announce, and rules of behavior in a browser (called the “virtual buffer”).

Each screen reader package has a “modifier” key to be pressed in conjunction with letters and other keys. nvda uses Insert (INS), which may be found in widely varying places on keyboards: immediately right of the space bar on my Toshiba, in the upper right corner on the Motion Computing tablet's plastic cover keyboard, and to the middle right of backspace on my Bluetooth 101 full-sized keyboard. One of the hassles, a dread for me, is memorizing the needed keys for the screen reader and my customary applications. It’s boring and never-ending, and I just needed to get over it. An audio tour on the nvAccess website prodded me to continue trying, even to “RTFM”.

Here’s my memory bank to illustrate a few:

Windows shortcuts: ALT+TAB among windows, ALT+F4 to exit an app, ESC to get out of most dialogs, space or enter to push a button, TAB to move around in a window, right and left arrows to open and close tree views, with up and down inside a tree.

Trainer Karen McCall of Karlen Communications in Canada calls this knowledge “literacy”, but it is often not learned until needed and then becomes essential. With nvda (or any other screen reader), a user must develop a rhythm of interaction, receiving and interpreting speech feedback, e.g. where a TAB has taken you, within or among applications.

Frequent nvda actions in Mozilla Firefox include: “h” to move among headings, “k” among links, up and down arrows between lines, top to reload, combining with the Firefox shortcuts control+F to quick find a phrase, control+K to open a search, control+L to type in a location, control+TAB to move among tabs, and control+T to create a new tab. And now the big switcheroo in a screen reader is to notify it that you’re in an edit box and don’t want “k” and the other nvda operations; this is invoked by Insert+Space, known as “virtual buffer pass-through on or off”, always to be remembered on forms.

Well, to wrap up this post, I highly recommend nvda for partially sighted users. It works unbelievably well, especially considering the price ($0) and its ease of setup and portability. It lacks the scripting and maturity of the big $1000 packages but has a corps of open source developers helping out; that is, nvda has a rapid trajectory of development and improvement. As a developer myself, I find nvda inspirational, showing how much one dedicated technical person can accomplish in a remarkably short span of time.

My prejudice toward open source throws some light on my semi-facetious comment above about the “stove pipe of evil”. “Stove pipe” refers to communities that don’t talk to each other very much and only use software within their pipe or area. I’m not implying a Microsoft evil empire here, but rather that lock-in is a user choice I do not want for myself. Too often I’ve received email consisting of a paragraph written as an MS Word attachment that I need to click to launch a big application to read, which assumes I own MS Word or have its reader working, when a simple text body would be safer (clicking an attachment asks for trouble, like a virus), lighter, and easier to produce. Outlook is OK but too attached to Word. Internet Explorer has finally provided the tabbed windows available for years in Mozilla Firefox, and is a fine browser, but not attractive to me after Firefox. Where I’m let down now in the open source space is OpenOffice, which is inaccessible with nvda. Mostly, my Rebel says to follow the path of most freedom and change if it offers the affordability and functionality I need.

More to come on “Portable Apps, a good trend, and ones that work for me”, “Living in the new operating system of Web 2.0 and browsers”, and “untangling and reading gmail”.

Summary: I finally took the big leap away from the screen, following the nvda screen reader as I set up a new computing environment better accommodating my changing vision, acting as my own rehab support and trainer.

REFERENCES: