Posts Tagged ‘accessibility’

Sandwich Board Signs Are Dangerous!

January 15, 2016

The Costs of Sandwich Board Advertising Signs


  1. Are wooden sandwich board signs dangerous? Are they safe when placed according to city code?
  2. Who pays if there’s an accident between a pedestrian and a sign? How much liability insurance is required of sign owners? How much liability insurance is apportioned to pedestrian accidents within the city budget?
  3. What is the cost/benefit to merchants? citizens? tourists?
    What is the risk/benefit to merchants? citizens? tourists?


Submit your answers below:

Accident report: Sandwich Board Sign Injures Pedestrians on Downtown Prescott Street, October 14, 2016

Deceased was walking along Whiskey Way on a day of normal weather, using her mobility cane. A careless runner pushed through a crowd of children leaving their school. Several people bumped into each other, and a few fell down.


Deceased attempted to step aside while untangling her cane from a sandwich board advertising sign. Such open wooden frame barriers are positioned approximately every 10 feet. Other pedestrians were also injured as signs broke apart or flattened on the sidewalk. A cascade of signs and bodies caused many additional falls.


Deceased struck her head on a sharp sign edge and a second time as she fell onto the street, unconscious.


Since the signs that caused injuries were legal under the city’s ordinance, no citations were issued. Liability remains to be determined. Lawsuits are expected against the city, merchants, and sign distributors. The careless runner has not been located, having probably ducked into a local bar after the chaos, perhaps without even realizing its cause.


Let’s prevent this accident from happening! Any pedestrian is vulnerable to unsafe signage, anywhere. And those sawhorses and barriers that warn of unsafe pavement are dangerous, too! Tell Prescott City Council to ban advertising sidewalk signs and fix sidewalks that need fixing.


Retire CAPTCHA style thinking, please

December 26, 2011

This is my annual post evaluating progress, if any, in accessibility as measured by the Congressionally recognized Computer Science Education Week outreach website. This year, oh, my, the tone of non-inclusiveness rings so loud and clear.

Regarding the NSF CISE Bits and Bytes article touted by CCCBlog honoring CSEdWeek

Did you know that the *you* of the NSF article touting reCAPTCHA doesn’t include teachers, students, researchers, citizens, and other *subhumans* with sensory or cognitive differences that limit their abilities to pass the barrier of those wiggly lines? Did you know that buying tickets, signing up for social media, commenting on websites, or applying for jobs is a privilege denied to many people otherwise fully human? Did you know the “evil CAPTCHA” is a symbol, like drawing a bar across the universal wheelchair badge?

Do you know the stats on qualified individuals kept out of the CS field by inaccessible teaching materials, practices, and pedagogical tools, and by inexperience working with students with different abilities? Do you know the extraordinary blind developers Mic and James, awarded FCC recognition for a free screen reader that replaces expensive assistive technology and opens doors for millions around the world through the NVAccess.org open source project? Do you realize the opportunity loss that educational settings suffer when those who don’t pass the CAPTCHA test aren’t present, thereby perpetuating generations of students not exposed to universal design principles and evidence of their benefits?

I’m one of those subhumans excluded by reCAPTCHA, with strong karma from my earlier days as a sighted researcher and educator. May I invite you to another world, “beyond the CAPTCHA”?

  • Download the iBlinkRadio app for your smart phone to listen to loads of upbeat geek talk on the technology that enables full lives within blind communities. Notice the appreciation for Apple’s strides in usability of mobile products. Triple-click Home turns on VoiceOver to go where your eyes aren’t needed.
  • Link through #a11y or #accessibility to Tweetups of professionals who work for inclusion through computational thinking.
  • Read up on good engineering in the Chisholm and May book “Universal Design for Web Applications” and dozens of blogs and YouTube videos.
  • You might run across WebVisum and Solana, social responses to the “evil CAPTCHA”, and positive crowdsourcing apps like VizWiz and its companions on AppleVis.com.

Wow, what people outside computer science research and education have accomplished for their own survival and advancement, despite CAPTCHA style thinking.

So, I took my dismay at the NSF/CCC non-inclusive perspective to the CSEdWeek feedback page. Lo, I could not post due to a visual-only (or some such) CAPTCHA! What were you thinking, computing association managers, to not require and test for accessibility when you so high-mindedly push outreach from the computing field? The Top 10 CCC posts are truly impressive, but the humanity of the computing field exemplifies the world of TAB (temporarily able-bodied) thinking. It’s great if reCAPTCHA does a bit of good for resurrecting print archives, but there’s an even better story in great technology, social interaction, hard work, and stamina to dismantle artificial barriers like wiggly lines and garbled audio. And, really, who would think there aren’t ways to pay $0.50 to web workers to attack those as-yet unscrambled text fragments?

How about making 2012 CSEdWeek truly inclusive? Require and test for accessibility of your websites and messages with modern practices and involving real students, educators, researchers, and citizens with more physical and cognitive diversity than the TAB world promoted so far. No, this isn’t a $10M research initiative but rather remedial work to bring thinking and practices up to a modern level of respect for civil rights and the crucial role of usable technology for everyone.


Retire the CAPTCHA style mentality, please.

  1. Computing Community Consortium empowering U.S. research

  2. NSF CISE newsletter touting reCAPTCHA as using computation with properly equipped humans; funder of BPC, Broadening Participation in Computing
  3. Computer Science Education Week outreach events, needs an accessibility statement and commitment
  4. FCC Broadband Accessibility initiative and winners including home of free open source screen reader NVDA
  5. Is CS Education ready to honor the A.D.A.? The EDUCAUSE perspective

  6. Will CS meet accessibility in 2011? Long way to go!

Beyond Universal Design – Through Multi-Sensory Representations

January 8, 2011

The following recommendation was offered at the CyberLearning workshop addressed in the previous post on CyberLearning and Lifelong Learning and Accessibility. The post requires background in both accessibility and national funding policies and strategies.


This is NOT an official statement but rather a proposal for discussion. Please comment on the merits.

Motivation: CyberLearning must be Inclusive

To participate fully in CyberLearning, persons with disabilities must be able to apply their basic learning skills using assistive technology in the context of software, hardware, data, documentation, and web resources. Trends toward increased use of visualizations both present difficulties and open new arenas for innovative applications of computational thinking.

Often, the software, hardware, and artifacts have not been engineered for these users, unforeseen uses, and integration with a changing world of assistive tools. Major losses result: persons with disabilities are excluded or must struggle; cyberlearning experiments do not include data from this population; and insights from the cognitive styles of diverse learners cannot contribute to the growth of understanding of cyberlearning.

Universal Design Goals

Universal design embodies a set of principles and engineering techniques for producing computational tools and real-world environments for persons usually far different from the original designers. A broader design space is explored with different trade-offs, using results from Science of Design (a previous CISE initiative). Computational thinking emphasizes abstraction to manage representations, which leads to the core challenges for users with disabilities and different learning styles. For example, a person with vision loss may use an audio channel of information received by text-to-speech, as opposed to a graphical interface for visual presentation of the same underlying information. The right underlying semantic representation will separate the basic information from its sensory-dependent representations, enabling a wider suite of tools and adaptations for different learners. This approach transcends universal design by tapping back into the learning styles and methods employed effectively by persons with many kinds of disabilities, which may then lead to improved representations for learners with various forms of computational and data literacy.

Beyond Universal Design as Research

“Beyond Universal Design” suggests that striving for universal design opens many research opportunities for understanding intermediate representations, abstraction mechanisms, and how people use these differently. This approach to CyberLearning interweaves threads of NSF research: Science of Design and computational thinking from CISE, human interaction (IRIS), and many programs of research on learning and assessment.

Essential Metadata Requirements

A practical first step is a system of meta-data that clearly indicates suitability of research software and associated artifacts for experimental and outreach uses. For example, a pedagogical software package designed to engage K-12 students in programming through informal learning might not be usable by people who cannot drag and drop objects on a screen. Annotations in this case may serve as warnings that could avoid exclusion of such students from group activities by offering other choices or advising advance preparation. Of course, the limitations may be superficial and easily addressed in some cases by better education of cyberlearning tool developers regarding standards and accessibility engineering.

Annotations also delimit the results of experiments using the pedagogical software, e.g. better describing the population of learners.
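To make the idea concrete, here is a minimal sketch of what such a meta-data record might look like; the field names and the keyboard-only check below are purely hypothetical illustrations of the labeling idea, not a proposed NSF scheme.

```python
from dataclasses import dataclass, field

@dataclass
class AccessibilityMetadata:
    """Hypothetical suitability label for a cyberlearning software package."""
    tool_name: str
    input_modes: list = field(default_factory=list)    # e.g. "keyboard", "mouse drag-and-drop", "touch"
    output_modes: list = field(default_factory=list)   # e.g. "visual", "text-to-speech", "audio cues"
    screen_reader_tested: bool = False                 # verified with NVDA, VoiceOver, etc.
    known_barriers: list = field(default_factory=list) # warnings for outreach organizers

# Example annotation for a drag-and-drop block-programming environment
block_tool = AccessibilityMetadata(
    tool_name="Example block-programming tool",
    input_modes=["mouse drag-and-drop"],
    output_modes=["visual"],
    screen_reader_tested=False,
    known_barriers=["requires drag and drop", "no keyboard alternative"],
)

def suitable_for_keyboard_only(meta: AccessibilityMetadata) -> bool:
    """Simple check an outreach organizer might run before a group activity."""
    return "keyboard" in meta.input_modes and not meta.known_barriers

print(suitable_for_keyboard_only(block_tool))  # False: offer other choices or prepare in advance
```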

In the context of social fairness and practical legal remedies as laid out by the Department of Justice regarding the Amazon Kindle and other emerging technology, universities can take appropriate steps in their technology adoption planning and implementation.

Policies and Procedures to Ensure Suitable Software

For NSF, appropriate meta-data labeling then leads to planning and eventual changes in the ways it manages its extensive base of software. Proposals may be asked to include meta-data for all software used in or produced by research. Operationally, this will require proposers to become familiar with the standards and methods for engineering software for users employing adaptive tools. While in the short run this remedial action may seem limiting, in the long run the advanced knowledge will produce better designed and more usable software. At the very least, unfortunate uses of unsuitable software may be avoided in outreach activities and experiments.
Clearly, NSF must devise a policy for managing unsuitable software, preferably within a three-year time frame from inception of a meta-data labeling scheme.

Opportunities for Multi-Sensory Representation Research

Rather than viewing Suitable Software as a penalty system, NSF should find many new research programs and solicitation elements. For example, visual and non-visual (e.g. text-to-speech) representations, or mouse versus speech input, can be compared for learning effectiveness. Since many persons with disabilities are high functioning in STEM, better understanding of how they operate may well lead to innovative representations.

Additionally, many representations taken for granted by scientists and engineers may not be as usable by a wider citizenry with varying degrees of technical literacy. For example, a pie chart instantly understandable by a sighted person may not hold much meaning for people who do not understand proportional representations, and may be completely useless for a person without sight, yet it can be rendered informative by tactile manipulation or a chart explainer module.
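As a toy illustration of a “chart explainer” module (a hypothetical sketch, not an existing product), the same proportions that would drive a pie graphic can be rendered as a speech-friendly text summary:

```python
def explain_pie_chart(title, categories):
    """Render pie-chart data as a text/speech-friendly summary.

    `categories` maps labels to raw values; the same dictionary could
    also feed a graphical or tactile rendering of the chart.
    """
    total = sum(categories.values())
    parts = sorted(categories.items(), key=lambda kv: kv[1], reverse=True)
    lines = [f"{title}: {len(parts)} categories."]
    for label, value in parts:
        share = 100 * value / total
        lines.append(f"{label} is about {share:.0f} percent ({value}).")
    return " ".join(lines)

portfolio = {"Stocks": 55, "Bonds": 30, "Cash": 15}
print(explain_pie_chart("Portfolio allocation", portfolio))
# Portfolio allocation: 3 categories. Stocks is about 55 percent (55). ...
```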

Toward a Better, Inclusive Workforce

Workforce implications are multi-fold. First, a population of STEM tool developers better attuned to the needs of persons with disabilities can improve cyberlearning for as much as 10% of the general population. Job creation and retention should improve for many of the estimated 70% unemployed and under-employed persons with disabilities, offering both better quality of life and reduced lifetime costs of social security and other sustenance. There already exists an active corps of technologically adept persons with disabilities with strong domain knowledge and cultural understanding regarding communities of disabilities. The “curb cuts” principle also suggests that A.D.A. adaptations for persons with disabilities offer many unforeseen, but tacitly appreciated, benefits for a much wider population, and at reasonable cost. NSF can reach out to these active developers with disabilities to educate its own staff as well as the STEM education and development worlds.

Summary of recommendation

  1. NSF adopt a meta-data scheme that labels cyberlearning research products as suitable for different abilities, with emphasis on the current state of assistive technology and adaptive methods employed by persons with disabilities.

  2. NSF engage its communities in learning the necessary science and engineering for learning by persons with disabilities, e.g. using web standards and perhaps new cyberlearning tools developed for this purpose.

  3. NSF develop a policy for managing suitability of software, hardware, and associated artifacts in accordance with civil rights directives to universities and general principles of fairness.

  4. NSF establish programs to encourage innovation in addressing problems of unsuitable software and opportunities to create multiple representations, using insights derived from limitations of software as well as studies of high-performing learners with disabilities.

  5. NSF work with disability-representing organizations to identify explicit job opportunities and scholarships for developers specializing in cyberlearning tools and education of the cyberlearning education and development workforce.

Note: this group may possibly be the National Center on Technology Innovation.

CyberLearning and Learning Cyber: Lifelong and Accessibility Experiences

September 19, 2010

Susan L. Gerhart slger123@gmail.com


Alex Finnarn Alex.Finnarn@yc.edu

White paper for NSF CyberLearning Task force


Background: Alex is completing a one-year service term with AmeriCorps VISTA as an educational technology specialist for OLLI, the Osher Lifelong Learning Institute at Yavapai College, also working with Northern Arizona SCORE (Service Corps of Retired Executives) in Prescott, Arizona. Susan is a semi-retired computer scientist, translating her experiences with vision loss into education and advocacy for web accessibility and adoption of assistive technology. She is a student of philosophy, history, and economics in OLLI, working with Alex and others on a technology task force, and a facilitator of courses on social media and on technology and society.


    To make cyber learning effective in the 21st century, it needs to be available for all populations and people who possess a desire to learn.
Current technology has not lived up to this promise. The younger generations of learners have embraced technology adequately with the help of adventurous teachers and innate ability; however, the older generations of learners have met cyber learning with difficulty. Oftentimes, the systems they desire to use are not streamlined enough for adequate adoption. Finally, learners with classic accessibility issues, like poor vision, are ignored when online learning tools are designed. By reaching out to these disadvantaged populations, the whole of cyber learning will improve.

Experience with Cyber Learning for Lifelong Learners

OLLI is nationally supported by the Osher Foundation, operating at over 100 independent U.S. locations. Yavapai College OLLI has over 600 members selecting peer-directed courses from over 50 subjects during each six-week session, for fees of $130 for five class sessions per year. Courses are often structured around half-hour lectures from The Learning Company, supplemented by facilitator-moderated discussions and materials. Diverse fare includes computer training (keyboard, Windows, Mac, Internet, Office, Photoshop) as well as rock and roll, art, health, memoir writing, current events, etc.


We asked: Where does CyberLearning assist OLLI activities and courses? What benefits might accrue from a good technology platform?


We began to place course materials online after conducting a user survey in the spring of 2010. 87% of respondents in the survey reported having Internet access at home, and 79% reported checking their email at least once a day. The majority of the membership for OLLI did indeed have access to and used the Internet; however, none of the classes were able to readily incorporate cyber learning into their curriculum. A few classes tried using an online learning system, but interest peaked early and soon faded into disuse. With an able-bodied, intelligent, and Internet-ready membership, why was this OLLI unable to engage in cyber learning?


From strongly worded survey comments, we derived a “social contract” that members would not be forced into technology but rather be offered optional technology enhancements. Without clear-cut cost benefits, such as reduced printing, or measurably improved learning objectives, we focused on outreach to home-bound members, interaction with similar institutions for broader curricular opportunities, repositories and sharing within courses, and archiving institutional pictures and stories.


Existing platforms generally failed to attract interest and use from facilitators despite tutorials and assistance. The first problem is privacy, quite appropriate given repeated warnings of phishing and identity theft, but a barrier to sharing when members do not want a public web identity (Facebook aside). Streamlined and flexible entry is essential, especially when courses occur in rapid cycles of six weeks. Forums for sharing are sparsely used because members are involved in many personal and community activities. They spend time as desired, but not required, on outside reading, Googling, and reflecting. A crucial feature of OLLI classes is the lack of tests or assessments during the course. Once grading and competition are removed from the classroom, many online platforms become bloated with unnecessary features. Furthermore, the incentive of using an online classroom to take a quiz or study for a test disappears, and a student must rely on innate curiosity to visit an online classroom.


While email and search engine savvy, OLLI members are not cognitively familiar with the models of forums, blogs, wikis, or tweet streams, and because of this, we are faced with introducing both new models and complex platforms together. After some experimentation and testing, we settled on using EDU 2.0, a rapidly growing U.K. based company with a reasonable business model and support, for an online classroom. We also partnered with another interesting venture in an Australian-based U3A, University of the 3rd Age, which offers self-paced courses and repositories available for facilitator adaptation at similar lifelong learning institutions. Although the OLLI membership is predominantly White, well-traveled, and professionally diverse, international thinking and contacts can offer many new opportunities for our OLLI, like an international book club.

Meanwhile, OLLI’s monthly newsletter has been adapted to appear on a WordPress blog, with future plans for moderated forums. We are also actively using the college’s interactive TV classroom connection to offer distributed courses to our sister OLLI, expanding their course selection in the process. A long-term goal is to host joint OLLI Internet-based courses that would take advantage of the country’s pool of retired expertise. However, the goal of reaching homebound elders in a community lacking public transit remains, sadly, primarily a matter of offering shared rides and relying on volunteers working within the public library.

Perhaps a more important goal is “Learning Cyber”, or learning “by osmosis” how social networks and cyber learning are changing our information practices. Why would any sane person use Twitter? How does a grandparent respond to pressure to participate in Facebook in order to see pictures, or monitor children and grandchildren, and vice versa? Does Google always provide correct information? What happens when newspapers open articles to potentially unpleasant community commenting? What is RSS? How does one critically check facts and correct chain emails with political misinformation? Facing complex interactions with Social Security websites, how does one upgrade one’s skills for PDF, forms, and chat help? Who wrote Wikipedia? When can YouTube, BigThink, and TED supplement the History and Discovery cable television channels? What are our real privacy rights regarding Google, Facebook, and online retailers? Institutions like OLLI provide an informal setting for increasing and assessing the skills of individual Cyber Learners. Our technology initiatives may be more effectively directed at exposure and bridging generations in both technological and chronological senses.

Recommendations

For the continuing improvement of a national Cyber Learning movement, we suggest researchers and developers incorporate, sooner rather than later, constituents from learning environments such as OLLI and similar institutions. We also recommend investigating the educational and technological practices of the two international sources we found most attractive, EDU 2.0 and U3A. The above experience should provide insights into and questions about cross generational Cyber Learning, which will benefit the movement as a whole.

Links


  1. The Bernard Osher Foundation Lifelong Learning Institutes


  2. OLLI Yavapai College, Prescott Arizona


  3. The Learning Company DVD Lectures


  4. “University of the Third Age” international movement


  5. U3A Australia, courses at Griffiths University


  6. EDU 2.0 Free U.K. based Learning Site

How Attention to Accessibility Can Improve Cyber learning

Attention to accessibility for persons with disabilities should be an immediate objective for educating *ALL* constituencies who touch any aspect of Cyber learning. Consider “accessibility” as the practices and technology that enable persons with disabilities using “assistive technologies” to participate fully and comfortably in CyberLearning.


Indeed, there is no choice if the Departments of Justice and Education follow through on their “Dear College President” letter regarding fairness in applications of emerging technologies in academic environments. “Accessibility” here means that devices and web sites must support assistive technologies commonly available through special education channels and increasingly appearing in mainstream markets: screen (text-to-speech) readers, alternative input/output devices, networked tablet readers such as Kindle and iPad, and possibly lab instrumentation and pedagogical software.


As we argued regarding senior learners, citizens and markets must be served by people who differ in many aspects of physical and mental activities. Education workplaces and curricula must adapt to concepts of universal design and cultural diversity.
Fortuitously, adapting to accessibility offers a systematic way of expanding and analyzing design tradeoffs that benefit far more than persons with disabilities. Think about curb cuts, originally for wheelchairs and now beneficial to baby strollers, bikers, inattentive walkers, and luggage cart users. In web environments, standards address usability for persons using screen readers (problems that also cause difficulties for many mobile device users), facilitate interoperability of browsers and other user agents, and help manage the costs of do-overs and long-term maintenance.

Recommendations


For CyberLearning to reach its potential and broaden participation, attention to accessibility is not only overdue and inevitable but also a chance to refresh underlying technology as a CyberLearning experience in itself.


1. Web standards such as WCAG 2 provide a fledgling “science of accessibility” in the form of definitions, principles, experimental results, and field trials. Standards and theories evolve through high quality peer reviews, broad community input, extensive documentation, continuing debate in blogs and on Twitter, and adoption earlier in HTML release cycles. Professor Richard Ladner’s group at U. Washington contributes in-depth traditional graduate and capstone education experiences, experiments, and publications, yielding cohorts of researchers also involved in outreach to K-12 students with disabilities. Furthermore, an engineering paradigm is emerging as “progressive enhancement”, supported by static analyzers and free operational tools (the NVDA screen reader and VoiceOver on Macs). This science is a rich area for computational thinking.

2. University and professional organization web sites are often exquisitely poor examples of attention to accessibility, as attested by a recent NSF-funded study, ironically locked behind a professional society pay wall. Why are many Cyber learning organization web sites so bad? Accessibility simply is not a requirement; e.g. look up your own organizational accessibility statement. Is there one, is it followed, and who is responsible? OK, so academics don’t have time to learn or enforce accessibility theory or practice. But is it acceptable to turn away students who can otherwise function well in society but face extra barriers in STEM? And where will accessibility-aware CyberLearning developers come from? Ouch, should organizations such as NSF and MIT promote inaccessible pedagogical tools such as Scratch?


In fact, we are not talking major engineering feats, but rather well-structured pages as in good technical communication: a few lines of code that make forms into relational structures and pictures into captioned objects. The principle is general use of POSH (Plain Old Semantic HTML), straight text HTML preserved through styles and fancy interactions, topped off by seconds of automated compliance analysis and minutes of insightful execution of use cases. However, accessibility in pedagogical software definitely requires fundamental adoption of hooks and interfaces provided by system vendors.
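For a sense of what those “seconds of automated compliance analysis” might look like, here is a rough Python sketch (assuming the BeautifulSoup library) that flags two classic POSH omissions, missing alt text and unlabeled form fields. It is illustrative only, not a substitute for a real WCAG checker.

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

SAMPLE = """
<form>
  <label for="email">Email</label><input id="email" type="text">
  <input id="age" type="text">               <!-- no label: flagged -->
</form>
<img src="chart.png" alt="Pie chart of budget shares">
<img src="logo.png">                         <!-- no alt text: flagged -->
"""

def quick_posh_check(html):
    """Flag two common POSH omissions: images without alt text and inputs without labels."""
    soup = BeautifulSoup(html, "html.parser")
    problems = []
    for img in soup.find_all("img"):
        if not img.get("alt"):
            problems.append(f"image {img.get('src')} has no alt text")
    labeled_ids = {lbl.get("for") for lbl in soup.find_all("label")}
    for inp in soup.find_all("input"):
        if inp.get("id") not in labeled_ids:
            problems.append(f"input {inp.get('id')} has no associated label")
    return problems

for problem in quick_posh_check(SAMPLE):
    print("WARNING:", problem)
```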


Think of this change as one small step in technical communication and one giant leap forward in understanding and improving human learning performance.


3. Practically speaking, accessibility can only be grafted onto courses and tools rather than taught as a separate subject. But creative and active learning can come into play: interviewing local ADA specialists for requirements and projects; turning off displays and browsing with a screen reader; estimating costs of retrofitting for omitted accessibility requirements; analyzing risks of lost markets and litigation; adding features suggested by audio supplements or alternative output and input channels; ethics and accessibility addenda to assignments. People who love game controllers and touch screen mobile devices should dig these exercises.


4. Specific interventions must be attempted, starting with faculty awareness and introduction to the science of accessibility and its economic importance as well as social fairness. Suggested activities: accessibility seminars at educator gatherings; forced overhaul of professional and government sites to match .com and other .gov levels; design contests for students to make over and create new information resource sites to meet the grand universal design challenge; audits of pedagogical tools, including textbooks, for universal learning objectives encompassing accessibility; release of all disability-related publications now imprisoned behind professional society pay walls; increased awareness of accessibility as a job and professional specialty; recognition of assistive tech as part of user interfaces; rubrics for POSH in technical communications.


On a personal note, many avid learners gain vision rehabilitation facilitated through a vibrant online culture of blogs and podcasts on emotional, social, educational, and technical topics. Visit this world yourself: book clubs and interactive demos at AccessibleWorld; product demos by individual users at BlindCoolTech; more demos and discussions at ACBRadio; and now a community of #accessibility and #a11y gurus and users on Twitter. Off the mainstream, this community takes full advantage of CyberLearning while casting a wider net to newly disabled individuals, a testimony to spontaneous online learning.

The Data Literacy Challenge


Finally, while the above complaints and suggestions are largely remedial, one clear challenge is the “equal visualization” of information and data. Portfolio pie charts, rainfall tables, stimulus recovery expenditure maps, timelines, and the like are all essential for citizen participation and difficult for visually impaired people. Difficult, yes, but can alternative and multiple ways of channeling data into brains be accomplished through the adapted and flexible recognition and reasoning processes developed by visually impaired thinkers such as scientists and engineers? Can these new models of information and modes of interaction then benefit people with less analytical background or resistance to data-driven reasoning? Designing cyber learning for the temporarily fully enabled may not only limit those currently working with disabilities but fail to build upon the unique experiences and qualities of disabilities, which we all have intermittently and eventually.

Links


  1. Department of Justice A.D.A. letter to college presidents


  2. W3C web standards and accessibility guidelines

  3. U. Washington assistive technology and accessibility projects (Richard Ladner)

  4. Book “Universal Design for Web Applications” by Matt May and Wendy Chisholm


  5. White paper “Grafting Accessibility onto Computer Science Education”, “As Your World Changes” blog, Susan L. Gerhart


  6. Inaccessible article on inaccessibility of academic web sites

  7. Newly founded Institute on Cultural Diversity, including persons with disabilities

Disablism: The Good, Bad, and Maddening

May 1, 2010


Disablism Day May 1 2010

I’m enjoying Goldfish’s invitation for “Blogging About Disablism Day”.

The good about disability

Background: print-disabled and legally blind, five years into retirement.

  1. I love technology. And, wow, does disability open your eyes, so to speak, or maybe it’s our ears and brains. For example, I carry my entire rebuilt library of over 1000 DAISY books from bookshare.org on a lavaliere Booksense, along with many GB of podcasts, all downloaded via the Levelstar Icon Mobile Manager and docking station. Better reading now than ever in my life, thanks to this technology and the Internet.

  2. I meet many cool people through my disability. The virtual communities of #a11y and #accessibility on Twitter are my gurus and heroes, loading up my browser tabs with good articles and forging new links in my mental map of the field. In physical life, I’m the lady with the white cane to ask about macular degeneration.

  3. Retired and still kicking, my disability + technology background + learning regime have given me a focus for hours a day of accessibility activism as well as outreach. “Turning lemons into lemonade”, they say, but I just call this a lifetime bonus for as long as I can hold it together.

The Bad and Other Stuff I’m Too Mad to Talk About

  1. My very own profession sucks at accessibility and supporting disabilities. As a computer science educator, researcher, developer, and manager I followed the trends of not noticing disabilities, and got some immediate karma. You know where all those unaware developers are coming from? Our very own computer science accreditation and technical programs.

    And even more inexcusable are the leading professional organizations, such as ACM and its decrepit website. Personally, I coughed up $200 for access to a pay wall of articles for my memoirs and on accessibility. A painfully unusable digital library interface did not elicit the help I requested, back-channel messages about accessibility problems were ignored, and all I got was a lifetime membership offer and more renewal notices. The ACM motto: “Of course, accessibility is important. But we don’t know anything about it. Now, please go away”.

  2. If you have or expect a vision problem, don’t move to a place without public transportation! What a difference in my life if only a bus scooted along the major crosstown connector street a block from my house! I can take taxis when I don’t have a regular driver available, and can also ask for rides, but the loss of independence is a daily demoralizer. Worse, when I do get out like a regular pedestrian, drivers enter crosswalks to scare me, and I know half the drivers are talking or otherwise not paying attention.

  3. Trying to establish new relationships with fuzzy faces is challenging. At least it’s easier now that I’m out in the open about vision loss compared with prior years of hiding, but it’s still saddening not to know the details of my lifelong learning classmates’ features. With everybody around a table like a talking space suit, I struggle to remember names to connect with voices and body outlines. But at least I’m really working on people connections, finally.

What do Vision Losers want to know about technology?

April 5, 2010


Hey, I’ve been off on a tangent from writing about adjusting to vision loss, diverted instead into a rant about (and praise for) website accessibility. Also absorbing my blogging efforts was a second run of Sharing and Learning on the Social Web, a lifelong learning course. My main personal tutors remain the wise people of #a11y on Twitter and their endless supply of illuminating blog posts and opinions. You can track my fluctuating interests and activities on Twitter @slger123.

To get back into action on this blog, I thought the WordPress search-term stats might translate into a sort of FAQ or update on what I’ve learned recently. Below are subtopics suggested by my interpretations of the terms people used to reach this blog. Some arrive inaccurately: people searching for tidbits on movies or books called ‘Twilight’ might be surprised to read about the memories of an elder gent battling macular degeneration in the 1980s. Too bad, but there are also people searching for personal experience of losing vision and for technology overcoming the limitations of vision loss. These folks are my target audience, who might benefit from my ramblings and research. By the way, comments or guest posts would be very welcome.


This post focuses on technology while the next post addresses more personal and social issues.

Technology Theme: synthetic speech, screen reader software, eBooks, talking ATMs

Terms used to reach this blog

  • stuff for blind people
  • writing for screen readers
  • artificial digital voice mp3
  • non-visual reading strategies
  • book readers for people with legal blind
  • technology for people with a print-disability
  • apps for reading text
  • what are the best synthetic voices
  • maryanne wolf brain’s plasticity
  • reading on smart phones
  • disabled people using technology
  • synthetic voice of booksense
  • technology for legally blind students
  • audio reading devices
  • reading text application
  • synthetic speech in mobile device
  • the use of technology and loss of eyesight
  • installer of message turn into narrator

NVDA screen reader and its voices

    Specific terms on NVDA reaching this blog:

  • NVDA accessibility review
  • voices for nvda
  • nvda windows screen reader+festival tts
  • videos of non visual desktop access
  • lag in screen reader speaking keys
  • nvda education accessibility

Terminology: screen reader software provides audio feedback by synthetic voice to users operating primarily on a keyboard, announcing events, listing menus, and reading globs of text.


How is NVDA progressing as a tool for Vision Losers?
Very well, with increasing acceptance. NVDA (NonVisual Desktop Access) is a free screen reader developed under an international project of innovative and energetic participants, with support from Mozilla and Yahoo!. I use NVDA for all my web browsing and Windows work, although I probably spend more hours with non-PC devices like the Levelstar Icon for Twitter, email, news, and RSS, as well as the Booksense and Bookport for reading and podcast listening. NVDA continues to be easy to install, responsive, and gradually gaining capabilities like Flash and PDF, but occasionally choking on memory-hog applications and heavy-duty file transfers. Rarely do I think I’m failing because of NVDA limitations, but I must continually upgrade my skills and complain about website accessibility (oops, there I go again).

The voice issue for NVDA is its default startup with a free open source synthesizer called eSpeak. The very flexible youngsters who have lived with TTS (text-to-speech) their whole lives are fine with this responsive voice, which can be carried anywhere on a memory stick and adapted for many languages. However, oldsters often suffer from “synthetic voice shock” and run away from the offensive voices. Now devices like the Amazon Kindle and the iPod/iTouch gadgets use Nuance-branded voices with quality between eSpeak and the even more natural voices from NeoSpeech, ATT, and other vendors. Frankly, this senior citizen prefers older robotic-style voices for book reading, especially when managed by excellent firmware like the Bookport Classic from APH. Here’s the deal: (1) give eSpeak a chance, then (2) investigate better voices available at the Voice and TextAloud store at Nextup.com. Look carefully at licensing, as some voices work only with specific applications. The main thing to remember is that your brain can adapt to listening via TTS with some practice, and then you’ll have a world of books, web pages, newspapers, etc. plus this marvelous screen reader.
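If you want to sample synthetic speech (and practice getting past synthetic voice shock) without installing a full screen reader, a few lines of Python with the pyttsx3 library will drive whatever TTS engine your operating system provides. This is just an illustrative sketch, not part of NVDA or any of the devices above.

```python
import pyttsx3  # pip install pyttsx3; uses SAPI5, NSSpeechSynthesizer, or eSpeak under the hood

engine = pyttsx3.init()

# List the voices installed on this machine; quality varies widely by platform.
for voice in engine.getProperty("voices"):
    print(voice.id, voice.name)

# Slow the rate a little; practiced TTS listeners usually push it much higher.
engine.setProperty("rate", 160)

engine.say("Your brain can adapt to listening via text to speech with some practice.")
engine.runAndWait()
```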

Apple Mania effects on Vision Losers

Translation: What are the pro and con arguments for switching to Apple computers and handheld devices for their built-in TTS?
Good question. Screenless Switcher is a movement of visually impaired people off PCs to Macs because the latest Mac OS offers VoiceOver text-to-speech built in. Moreover, the same capabilities are available on the iPhone, iTouch, and iPad, with different specific voices. Frankly, I don’t have the experience to feel comfortable with VoiceOver, nor knowledge of how many apps actually use the built-in capabilities. I’m just starting to use an iTouch (iPod Touch), solely for experimentation and evaluation. So far I haven’t got the hang of it, drawing my training from podcasts demonstrating the iPhone and iTouch. Although I consider myself skilled at using TTS and synthetic speech, I have trouble accurately understanding the voice on the iTouch, which is necessary to comfortably blend with gesturing around a tiny screen and, gulp, an onscreen keyboard. There’s a chicken-and-egg problem here: I need enough apps and content to make the iTouch compelling enough to gain usage fluency, but I need more fluency and comfort to get the apps that might hook me. In other words, I’m suffering from mild synthetic voice shock compounded by gesture shyness and iTunes overload.


My biggest reservation is iTunes’ strong hold on content and apps, because iTunes is a royal mess and not entirely accessible on Windows, not to mention wanting to sell me things I can get for free. Instead of iTunes, I get my podcasts in the Levelstar Icon RSS client and move them freely to other devices like the Booksense. Like many others with long Internet experience, such as RSS creator and web tech critic Dave Winer, I am uncomfortable with Apple controlling content, applications, and our very own materials, limiting users to being consumers and not fostering their own creativity. Could I produce this blog on an iPad? I don’t know. Also, Apple’s very innovative approach to design doesn’t result in much help to the web as a whole, where everybody is considered a competitor rather than a collaborator for Apple’s market share. Great company and products, but not compelling to me. The Google Android marketplace is more open and will rescue many apps also developed for Apple products, but doesn’t yet seem accessible at a basic level or in available apps. Maybe 2010 is the year to just listen and learn while these devices, software, and markets develop, while I continue to live comfortably on my Windows PC, Icon Mobile Manager and docking station, and book readers. Oh, yeah, I’m also interested in Gnome accessibility, but that’s a future story.

The glorious talking ATM

Terms used to reach this blog

  • talking ATM instructions
  • security features for blind in ATM


What could be more liberating than to walk up to a bank ATM and transact your business even if you cannot see the screen? Well, this is happening in many locations and is an example for the next stage of independence: store checkout systems. Here’s my experience. Someone from the bank or an experienced user needs to show you where and how to insert your card and plug in your ear buds. After that, the ATM should provide instructions on voice adjustment and menu operations. You won’t be popular if you practice for the first time at a busy location or time of day, but after that you should be as fast as anybody fumbling around from inside a car or just walking by. Two pieces of advice: (1) pay particular attention to CANCEL so you can get away gracefully at any moment, and (2) always remove your ear buds before striding off with your cash. I’ve had a few problems: an out-of-paper condition or mis-feed doesn’t deliver a requested receipt, the card protocol changed from insert-and-hold to insert-and-remove, an unwanted offer of a credit card delayed transaction completion, and it’s hard to tell when a station is completely offline. I’ve also dropped the card, sent my cane rolling under a car, and been recorded in profanity and gestures by the surveillance camera. My biggest security concern, given the usual afternoon traffic in the ATM parking lot, is the failure to eject or catch a receipt, which I no longer request. But overall, conquering the ATM is a great step for any Vision Loser. It would also work for MP3 addicts who cannot see the screen on a sunny day.

Using WordPress

Terms:

  • Wordpress blogging platform accessibility

  • wordpress widget for visual impaired

Translation: (1) Does WordPress have a widget for blog readers with vision impairments, e.g. to increase contrast or text size? (2) Does WordPress editing have adjustments for bloggers with vision impairment?


(2) Yes, ‘screen settings’ provides alternative modes of interaction, e.g. drag and drop uses a combo box to indicate position in a selected navigation bar. In general, although each blog post has many panels of editing, e.g. for tags, title, text, visibility, etc., these are arranged in groups, often collapsed until clicked for editing if needed. Parts of the page are labeled with headings (yay, H2, H3, …) that enable a blog writer with a screen reader to navigate rapidly around the page. Overall, good job, WordPress!


However, (1) blog reader accessibility is a bit more problematic. My Twitter community often asks for the most accessible theme but doesn’t seem to converge on an answer. Using myself as tester, I find WordPress blogs easy to navigate by headings and links using the NVDA screen reader. But I’m not reading by eyesight, so I cannot tell how well my own blog looks to sighted people or to those adjusting fonts and contrasts. Any feedback would be appreciated, but so far no complaints. Frankly, I think blogs as posts separated by headings are ideal for screen reading, and better than scrolling if articles are long, like mine. Sighted people don’t grok the semantics of H2 for posts, H3 for subsections, etc. My pet peeve is themes that place long navigation sidebars *before* the content rather than to the right. When using a screen reader I need to bypass these, and the situation is even worse when the page downloads as a post to my RSS client. So, my recommendation on WordPress themes: two columns with content preceding navigation, except for the header title and About.
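To see why heading structure matters so much, here is a small illustrative sketch (assuming the requests and BeautifulSoup libraries) that prints the heading outline of a page, roughly the skeleton a screen reader user jumps through with heading navigation:

```python
import requests
from bs4 import BeautifulSoup  # pip install requests beautifulsoup4

def heading_outline(url):
    """Print the H1-H6 outline of a page: roughly what heading navigation
    (e.g. pressing H in NVDA) lets a reader jump between."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    for h in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"]):
        level = int(h.name[1])
        print("  " * (level - 1) + h.get_text(strip=True))

heading_outline("https://wordpress.org/")  # any blog URL works; posts show up as H2, subsections as H3
```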

Books: iBooks, eBooks, Kindle, Google Book Search, DAISY, etc.

Terms

  • kindle+accessibility
  • how to snapshot page in google book
  • is kindle suitable for the visually impaired?
  • how to unlock books “from kindle”
  • is a kindle good for partially blind peo
  • access ability of the kindle

I’ll return to this broad area of readers and reading in a later post. Meantime, here’s a NYTimes op-ed article on the life cycle and ecosystem costs of print and electronic books. My concern is that getting a book into one’s sensory system, whether by vision or audio, is only the first step in reading any material. I’m working on a checklist for choices and evaluation of qualities of reading. More later.

Searching deeper into Google using the Controversy Discovery Engine

You know how the first several results from a Google search are often institutions promoting products or summaries from top-ranked websites? These are often helpful, but even more useful, substantive, and controversial aspects may be pushed far down in the search list pages. There’s a way to bring these more analytic pages to the surface by extending the search terms with words that rarely appear in promotional articles, terms that revolve around controversy and evidence. The Controversy Discovery Engine assists this expanded searching. Just type in the term as you would into Google and choose from one or both lists of synonym clusters to add to the term. The magic here is nothing more than asking for more detailed and analytic language in the search results. You are free to download this page to your own desktop to avoid any additional tracking of search results through its host site, to have it available any time, or to modify its lexicon of synonyms.
Some examples (a small scripted sketch of the expansion follows the list):

  1. “print disability” + dispute
  2. “legally blind” + evidence
  3. “NVDA screen reader” + research
  4. “white cane” + opinion
  5. “Amazon Kindle” accessibility + controversy
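For the scripting-minded, the expansion itself is nothing fancy. This hypothetical sketch (the synonym clusters are my own made-up stand-ins for the engine’s lexicon) just builds the expanded queries and matching Google URLs:

```python
from urllib.parse import quote_plus

# Hypothetical synonym clusters, in the spirit of the Controversy Discovery Engine's lexicon
CONTROVERSY = ["controversy", "dispute", "criticism", "debate"]
EVIDENCE = ["evidence", "research", "study", "data"]

def expanded_queries(term, clusters):
    """Yield the original term combined with each expansion word, plus a search URL."""
    for cluster in clusters:
        for word in cluster:
            query = f"{term} {word}"
            yield query, "https://www.google.com/search?q=" + quote_plus(query)

for query, url in expanded_queries('"Amazon Kindle" accessibility', [CONTROVERSY, EVIDENCE]):
    print(query, "->", url)
```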

    Feedback would be much appreciated if you find this deeper search useful.

    Adjustment themes: canes, orientation and mobility, accessibility advocacy, social media, voting, resilience, memories, …

    Coming in next post!

Could TTS news reading beat Kindle and smart phones?

January 27, 2010

This post responds to concerns in the ComputingEd post ‘Kindles versus Smart phones: Age matters, testing matters’. A UGa study and commentary focus on news reading as screen-dependent and vision-only. I suggest considering the print-disabled, TTS-dependent ecosystem to expand understanding of human reading and assistive device capabilities.

Reading experiments might be broadened to include pure TTS, i.e. no screens. But first, what criteria matter: reading rate, absorption level, device comfort, simulated print experience, distribution costs and convenience, …?


For the record, I just read this article by RSS, then switched to my Newsstand and downloaded the NYTimes and other papers from Bookshare.org, which cooperates with NFB Newsline and news companies I gratefully thank. Papers are delivered wirelessly in the XML-based DAISY format, retrieved and read on a Linux-powered mobile device (Levelstar Icon), and spoken in an old-style “robotic voice”. For delivery efficiency and cost, this cannot be beat, and I think I absorb selective news reading better than ever. But how is the experience of print-disabled news readers factored into comparisons like this article?


This will soon be relevant if Kindle, iPod/iTouch, etc. TTS reading is fully enabled and adopted by some readers of proprietary delivery systems like Amazon’s. For proper evaluation, it will be necessary to compare eReading by TTS on mainstream devices to that provided by evolved readers like the APH Book Port, Humanware Victor Reader Stream, PlexTalk Pocket, Levelstar Icon, and (my favorite) GW Micro Booksense. Also important is the media format, currently favored as DAISY on these devices. And finally there is the provision of media, currently limited legally to print-disabled readers, as by the NFB (National Federation of the Blind) and the non-profit Bookshare.org. In other words, there’s another ecosystem of reading, open only to the print-disabled, that might benefit those attracted to eReading.


Oh, my, here’s the “universal design” mantra again. ‘Reading news by screen’ is, of course, more limited than ‘reading by print or audio’. It’s possible that for some reading criteria the screen-free mode, or the open XML-based format with its reading devices and experienced reader population, may beat mainstream strategies!


Could these experiments be performed? Certainly, most universities have students who currently, or could, offer their experience with equipment provided through Disability Services. Fact quizzes and comprehension tests might raise questions about how our reading brains work and how well our reading devices and formats help or hinder. What research is in progress? Is there a CS agenda for this social and economic ecosystem? Why do people think reading is a vision-only activity? Ok, comics, photos, and crosswords are a bit challenging, but plain old print is so well handled by TTS. Let’s open our eyes and ears and fingers to a fuller range of capabilities. I would love to be a test subject for eReading experiments.

Story: A Screen Reader Salvages a Legacy System

October 30, 2009

This post tells the story of how the NVDA screen reader helped a person with vision loss solve a puzzle from a former workplace. Way to go, Grandpa Dave, and thanks for permission to reprint from the NVDA discussion list on freelists.org.

Grandpa Dave’s Story

From: Dave Mack
To: nvda

Date: Oct 29

Subj: [nvda] Just sharing a feel good experience with NVDA
Hi, again, folks, Grandpa Dave in California, here –
I have hesitated sharing a recent experience I had using NVDA because I know this list is primarily for purposes of reporting bugs and fixes using NVDA. However, since this is the first community of blind and visually-impaired users I have joined since losing my ability to read the screen visually, I have decided to go ahead and share this feel-good experience where my vision loss has turned out to be an asset for a group of sighted folks. A while ago, a list member shared their experience helping a sighted friend whose monitor had gone blank by fixing the problem using NVDA on a pen drive so I decided to go ahead and share this experience as well – though not involving a pen drive but most definitely involving my NVDA screen reader.


Well, I just had a great experience using NVDA to help some sighted folks where I used to work and where I retired from ten years ago. I got a phone call from the current president of the local Federal labor union I belonged to, and she explained that the new union treasurer was having a problem updating their large membership database with changes in the union’s payroll deductions that they needed to forward to the agency’s central payroll for processing. She said they had been working off-and-on for almost three weeks and no one could resolve the problem, even though they were following the payroll change instructions I had left on the computer back in the days when I had written their database as an amateur programmer. I was shocked to hear they were still using my membership database program, as I had written it almost three decades ago! I told her I didn’t remember much about the dBase programming language, but I asked her to email me the original instructions I had left on the computer and a copy of the input commands they were keying into the computer. I told her I was now visually impaired, but was learning to use the NVDA screen reader and would do my best to help. She said even several of the Agency’s programmers were stumped, but they did not know the dBase programming language.


A half hour later I received two email attachments, one containing my thirty-year-old instructions and another containing the commands they were manually keying into their old pre-Windows computer, still being used by the union’s treasurer once a month for payroll deduction purposes. Well, as soon as I brought up the two documents and listened to a comparison using NVDA, I heard a difference between what they were entering and what my instructions had been. They were leaving out some “dots”, or periods, which should be included in their input strings into the computer. I called the union’s current president back within minutes of receiving the email. Everyone was shocked and said they could not see the dots or periods. I told them to remember they were probably still using a thirty-year-old low resolution computer monitor and old dot-matrix printer, which were making the dots or periods appear to be part of the letters they were situated between.

Later in the day I got a call back from the Local President saying I had definitely identified the problem, thanking me profusely, and saying she was telling everyone I had found the cause of the problem by listening to errors none of the sighted folks had been able to see. And, yes, they were going to upgrade their computer system now after all these many years. (laughing) I told her to remember this experience the next time anyone makes a wisecrack about folks with so-called impairments. She said it was a good lesson for all. Then she admitted that the reason they had not contacted me sooner was that they had heard through the grapevine that I was now legally blind, and everyone assumed I would not be able to be of assistance. What a mistake and waste of time that ignorant assumption was, she confessed.


Well, that’s my feel good story, but, then, it’s probably old hat for many of you. I just wanted to share it, as it was my first experience teaching a little lesson to sighted people in my own small way, with the help of NVDA.


Grandpa Dave in California

Moral of the Story: Screen Readers Augment Our Senses in Many Ways, and an Invitation to Comment

Do you have a story where a screen reader or similar audio technology solved problems where normal use of senses failed? Please post a comment.


And isn’t it great that us older folks have such a productive and usable way of overcoming our vision losses? Thanks, NVDA project developers, sponsors, and testers.

Crossing the RSS Divide – making it simpler and compelling

September 18, 2009


RSS is a web technology for distributing varieties of content to wide audiences with minimal fuss and delay, hence its name, “Really Simple Syndication”. However, I’m finding this core capability is less well understood, and perhaps faces shared barriers among visually impaired and older adult web users. This article attempts to untangle some issues and identify good explanatory materials as well as necessary web tools. If, indeed, there is an “RSS Divide” rather than just a poor sample of web users and my own difficulties, perhaps the issues are worth wider discussion.

So, what is RSS?

Several good references are linked below, or just search for “RSS explained”. Here’s my own framework:

Think of these intertwined actions: Announce, Subscribe, Publish, Fetch, Read/Listen/View:

  1. Somebody (called the “Publisher”) has content you’re welcome to read. In addition to producing descriptive web pages, they also tell you an address where you can find the latest content, often called a “feed”. These are URLs that look like abc.rss or abc.xml and often have words or graphics saying “RSS”.
  2. When the Publisher has something new written or recorded, they, or their software, add an entry to this feed, i.e. they “publish”. For example, when I publish this article on WordPress, the text will show up on the web page, but my blog feed will also have a new entry. You can keep re-checking this page for changes, but that wastes your time, right? And sooner or later, you forget about me and my blog, sniff. Here cometh the magic of RSS!
  3. You (the “Subscriber”) have a way, the RSS client of tracking my feed to get the new article. You “subscribe” to my feed by adding its address to this “RSS client”. You don’t need to tell me anything, like your email, just paste the address in the right place to add to the list of feeds the RSS client manages for you. However, s
  4. Now, dear subscriber, develop a routine in your reading life where you decide, “ok, time to see what’s new on all my blog subscriptions”. So you start your RSS client which then visits each of the subscribed addresses and identifies new content. This “Fetch” action is like sending the dog out for the newspapers, should you have such a talented pet. The client visits each subscribed feed and notes and shows how many articles are new or unread in your reading history.

  5. At your leisure, you read the subscribed content not on the Publisher’s website but rather within the RSS client. Now, that content might be text of the web page, or audio (called podcasts), or video, etc. RSS is the underlying mechanism that brings subscribed content to your attention and action.
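
To make that Subscribe/Fetch/Read cycle concrete, here is a minimal sketch in Python. It assumes the third-party feedparser library is installed, and the feed address is only a placeholder, not a real subscription; a real client would also remember what you have already read.

    # Minimal sketch of the Subscribe/Fetch/Read cycle.
    # Assumes the third-party "feedparser" library is installed
    # (pip install feedparser); the URL below is just a placeholder.
    import feedparser

    # "Subscribe": keep the address of a feed you care about.
    FEED_URL = "https://example.com/blog/feed.xml"

    # "Fetch": ask the publisher's feed for its current entries.
    feed = feedparser.parse(FEED_URL)

    # "Read": list what is available; a full client would also track
    # which entries you have already seen.
    print(feed.feed.get("title", "Untitled feed"))
    for entry in feed.entries[:10]:  # the ten most recent items
        print("-", entry.get("title", "(no title)"), entry.get("link", ""))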

What’s the big deal about RSS?

The big deal here is that the distribution of content is syndicated automatically and nearly transparently. Publishers don’t do much extra work but rather concentrate on their writing, recording, and editing of content. Subscribers bear the light burden of integrating an RSS client into their reading routines, but this gets easier, albeit with perhaps too many choices. Basically, RSS is a productivity tool for flexible readers. RSS is especially helpful for those of us who read by synthetic speech so we don’t have to fumble around finding a web site then the latest post — it just shows up ready to be heard.


As commonly emphasized, RSS saves you lots of time if you read many blogs, listen to podcasts, or track news frequently. No more trips to the website to find out there’s nothing new; instead, your RSS client steers you to the new stuff when and where you’re ready to update yourself. I have 150 currently active subscriptions, in several categories: news (usatoday, cnet, science daily, accesstech, …); blogs (technology, politics, accessibility, …), some in audio. It would take hours to visit all the websites, but the RSS client scans the list and tells me of new articles or podcasts in a few minutes while I’m doing something else, like waking up. With a wireless connection for my RSS client, I don’t even need to get out of bed!
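
For the curious, here is a rough sketch, again in Python with the feedparser library, of how a client might sweep a categorized subscription list and count entries newer than your last reading session. The URLs and the cutoff time are placeholders, and not every feed supplies publication dates, so treat it as an illustration rather than a working reader.

    # Rough sketch: visit every subscribed feed and count items newer than
    # a cutoff. Assumes feedparser is installed; URLs and the cutoff are
    # placeholders, not my actual subscription list.
    import time
    import feedparser

    SUBSCRIPTIONS = {
        "news":  ["https://example.com/news/feed.xml"],
        "blogs": ["https://example.com/accessibility/feed.xml"],
    }
    LAST_READ = time.gmtime(time.time() - 24 * 3600)  # e.g. one day ago

    for category, urls in SUBSCRIPTIONS.items():
        unread = 0
        for url in urls:
            for entry in feedparser.parse(url).entries:
                published = entry.get("published_parsed")  # may be absent
                if published and published > LAST_READ:
                    unread += 1
        print(category + ":", unread, "new item(s)")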


This means I can read more broadly, not just because of the time saved, but also because my daily reading is structured. I can read news when I feel like tackling the ugly topics of the day, or study accessibility by reading blogs, or accumulate podcasts for listening over lunch on the portico. Time saved is time more comfortably used.

Even more, I can structure and retain records of my reading using the RSS client. Mine arranges feeds in trees so I can skip directly to science if that’s what I feel like. I can also see which feeds are redundant and how they bias their selections.


So, RSS is really a fundamental way of using the Web. It’s not only a comfort but is also becoming a necessity. When all .gov websites, local or national, plus all charities, etc., offer RSS feeds, it’s assumed citizens are able to keep up and really utilize that kind of content delivery. For example, whitehouse.gov has feeds for news releases and articles by various officials that complement traditional news channels with more complete and honestly biased content, i.e. you know exactly the sources, in their own words.


The downside of RSS is overload: more content is harder to ignore. That’s why it’s important to stand back, structure your reading sources, and measure and evaluate reading value, all of which is enabled by RSS clients.

Now, about those RSS clients


After 2+ years of happily relying on the Levelstar Icon Mobile Manager RSS client, I’m rather abashed at the messy world of web-based RSS clients, unsure what to recommend to someone starting to adopt feeds.

  1. Modern browsers provide basic support for organizing bookmarks, with RSS feeds as a specific type. E.g. Firefox supports “live bookmarks”, recognizing feeds when you click the URL. A toolbar provides names of feeds to load into tabs. Bookmarks can be categorized, e.g. politics or technology. Various add-on components provide sidebar trees of feeds to show in the main reading window. Internet Explorer offers comparable combinations of features: subscribing, fetching, and reading.

  2. Special reader services expand these browser capabilities. E.g. Google Reader organizes trees of feeds, showing the number of unread articles. Sadly, Google Reader isn’t at this moment very accessible for screen readers, with difficult-to-navigate trees and awkward transfer of articles to text windows. Note: I’m searching for better recommendations for visually impaired readers.
  3. I’ve not used but have heard of email-based RSS readers, e.g. for Outlook. Many feeds also offer an email option that mails new articles to you, which you then manage in folders or however you handle email.
  4. Smart phones have apps for managing feeds, but here again I’m a simple cell-phone caller only, inexperienced with mobile RSS. I hear the Amazon Kindle will let you buy otherwise-free blogs.
  5. Since podcasts are delivered via feeds, services like iTunes qualify, but they do not support full-blown text article reading and management.

So, I’d suggest first seeing whether your browser version handles feeds adequately and trying out a few. Google Reader, if you are willing to open or already have a Google account, works well for many sighted users and can be used, rather clumsily, if you’re partially sighted like me. Personally, when my beloved Icon needs repair, I find any of the above services far less productive and generally put my feed-reading fanaticism on hiatus.

Note: a solid RSS client will export and import feeds from other clients, using an OPML file. Here are Susan’s feeds on news, technology, science, Prescott, and accessibility, with several feeds for podcasts. You’re welcome to save this file and pull out individual feed addresses or import the whole lot into your RSS client.
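
If you are curious what such an OPML file contains, here is a tiny, made-up example and a sketch of pulling the feed addresses out of it with Python’s standard library; the titles and URLs are placeholders, not the contents of the actual exported list mentioned above.

    # Sketch of reading feed addresses out of an OPML export, using only
    # the Python standard library. The OPML below is a made-up example
    # (XML declaration omitted for brevity).
    import xml.etree.ElementTree as ET

    OPML = """<opml version="2.0">
      <head><title>Example subscriptions</title></head>
      <body>
        <outline text="News">
          <outline type="rss" text="Example News" xmlUrl="https://example.com/news/feed.xml"/>
        </outline>
        <outline text="Podcasts">
          <outline type="rss" text="Example Show" xmlUrl="https://example.com/show/podcast.xml"/>
        </outline>
      </body>
    </opml>"""

    root = ET.fromstring(OPML)
    # Every outline element carrying an xmlUrl attribute is a feed you
    # can paste or import into another RSS client.
    for outline in root.iter("outline"):
        url = outline.get("xmlUrl")
        if url:
            print(outline.get("text", ""), "->", url)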

Is there more to feeds in the future?

You betcha, I believe. First, feed addresses are data that can be shared on many social media sites, like the Delicious feed manager. This enables sharing and recommending blogs and podcasts among fans.


A farsighted project exploiting RSS feeds is Jon Udell’s Elm City community calendar project. The goal is to encourage local groups to produce calendar data in a standard format with categorization so that community calendars can be merged and managed for the benefit of everybody. Here’s the Prescott Arizona Community Calendar.


The brains behind RSS are now working on more distributed, real-time delivery of feeds; see Dave Winer’s Scripting News Cloud RSS project.


In summary, those who master RSS will be the “speed readers” of the web compared to others waiting for content to show up in their email boxes or wading through ads and boilerplate on websites. Indeed, many of my favorite writers and teachers have websites I’ve never personally visited but still read within a day of new content. This means a trip to these websites is often for the purpose of commenting or spending more time reviewing their content in detail, perhaps over years of archives.

References on RSS

  1. What is RSS? RSS Explained in simple terms

  2. Video on RSS in Plain English, emphasizing speedy blog reading in web-based RSS readers


  3. Geeky explanations of RSS from Wikipedia

  4. Whitehouse.gov RSS links and explanation (semi-geeky)

  5. Examples of feeds
  6. Diane Rehm podcast show feed

Amazon Kindle, Arizona State, Accessibility — What a mess!

July 6, 2009

The ACB and NFB lawsuit against the ASU-Amazon textbook test program is a big deal for discrimination activism and an educational opportunity on accessibility. The detailed complaint explains the difficulties blind students face using textbooks at ASU and how adoption of the Amazon Kindle trial program will set a bad example.


The textbook program has drawn television public-interest reports on journalism student Darrell Shandrow. Comments on the Chronicle.com Wired Campus report on the lawsuit have mixed understanding and support with political outrage at ADA accommodations. Related Reading Rights activism over publisher/author control of text-to-speech further clouds the question of the Kindle device’s own accessibility.


The purpose of this post is to inject my own opinion as well as link to some useful resources. I speak as a former educator who struggled with textbook bulk and price; a software engineer with spoken interface development experience; and a research manager with a technology transfer background. Although not a member of either ACB or NFB, I am a visually impaired avid reader living hours a day with text-to-speech and the affectionate owner of many assistive tools described in this blog. Oh, yeah, also an Arizona resident with some insight into ASU programs and ambitions.


I try to untangle the arguments my own way. Based on my own ignorance of a few years ago, I suspect many sighted people and those in process of losing vision lack understanding of how audio reading works. I provide a recorded demo of myself working the menus of a device for book and news reading to show the comparable capabilities lacking in the Amazon Kindle.
By the way, I’ve never seen, fondled, or considered buying a Kindle.

Untangling the Kindle-ASU-textbook Arguments

  1. Text-to-speech (TTS) is the workhorse, the engine, of assistive technology (AT) for visually impaired (VI) people. TTS reads content and also provides a spoken interface for menus, forms, selections, and other user operations. Nothing novel, implemented in dozens of devices on the market, standard expected functionality to support accessibility. (A tiny sketch of calling a TTS engine follows this list.)
  2. Amazon product designers included TTS presumably to provide a talking interface for mobile, hands-full users. TTS could read unlocked books, news, or documents downloaded to the Kindle, but only on Kindle software and rights management platforms. This established TTS as a mainstream commodity functionality, much like a spell checker as expected in any text processor.
  3. Book authors reacted that TTS represented a different presentation for which they could not control pricing or distribution. Amazon said, “ok, we’ll flip the default to give publishers control over enabling TTS”. Accessibility activists complained, “hey, you just took away an essential, attractive feature of the Kindle” and “you authors, don’t you want us to buy your books in a form we can read as immediately as those reading on screen?”
  4. More experience with the Kindle revealed that the TTS capability was not implemented to support the spoken interface familiar for VI people in commodity AT devices. See our downloadable demo and referenced tutorials to understand the critical role of spoken interfaces.
  5. Bummer. The Kindle that promised to become a mainstream accessible reading device was a brick, a paperweight, a boat anchor or door jamb if it weighed enough, just an inert object to someone who could not read the buttons or see menus and other interactions. Useless, cutting off 250,000 books and every other kind of content Amazon could funnel into the Kindle. Well, there’s always the Victor Reader, Bookport, Icon, and a host of other devices we already own, plus services like Bookshare, NFB Newsline, NLS reading services, Audible commercial audio, etc. Disappointed, a step forward missed.
  6. Now universities enter the picture with a partnership opportunity to test the Kindle on textbooks for selected courses, an educational experiment for the next academic year. Rising complaints from students about textbook costs, often $500 per semester, plus chronic dissatisfaction with the packaged all-in-one book, have led to alternative formats, even abandonment, of textbooks in many subjects. There is a great opportunity here to re-examine the educational benefits of a product and distribution system already familiar to tech-greedy students. But would the learning outcomes hold up? Amazon doesn’t say how rigorous this test program would be, but at least there’d be more Kindle-driven classroom feedback.
  7. Uh, oh. Those darned blind students can’t use the Kindle. Can universities block them from Kindle trial courses? Or let them in, relying on the established accessible-material support practices forced by the A.D.A.? This messes up the trial because the total population of students unfortunately includes visually impaired students and a range of other disabilities. Of course, there could be an economic win here in reduced accessible-material preparation costs, easily as much as the $500 Kindle when all staff and scanning prep time are included. Insights might even be gained into how the Kindle mitigates learning difficulties for some disabilities. Ouch, though conversely, it could be that reading on-screen amplifies learning difficulties students have overcome with print practices. Well, it’s a trial, an experiment, right? But, actually, this taxpayer and researcher asks, what are the parameters and the point of the trial program with several universities? Huh, just asking, can’t find any detail.
  8. So the well-lawyered NFB and ACB get together and back a long-time accessibility activist, now an ASU student, in a lawsuit seeking an injunction. Why get so huffy and legal? Outsiders don’t know in detail what mediation or requests have already been suggested and rebuffed, but, just guessing, these organizations are probably long on experience and short on patience with accessibility issues and promises. There’s a history of Apple pushing iPods and iTunes onto universities when these devices and services weren’t accessible. Settling with the Massachusetts blind services, Apple finally got out an accessible iTunes. Amazon has a legal record on accessibility, as does the LSAT.com registration website. I can well understand the reasoning that a big gorilla like Amazon won’t take time for accessibility if it can avoid doing so, for both profit and ego motives. The lawsuit simply says, “Time out! Amazon, you can make the Kindle accessible just like the standard practice with the AT we already use.” And, “ASU and other universities, don’t even think of harming VI students or taking on a tainted experiment that excludes VI students.” What’s the hurry, everybody? The textbook problem won’t be solved next year, the market will always be there, so it’s possible to have a trial that’s fair, responsible, and more informative if accessibility is counted in.
  9. Now, even local Phoenix television stations got interested in the story and, wow, what an educational moment! ASU public relations, still not recovered from their Obama honorary degree fiasco, responded with a flat “we have disability services in place. That’s enough!”. But this taxpayer thinks differently. Part of the experiment is rapid delivery of texts and other materials, perhaps challenging or disrupting disability services. And if the Kindle device itself is part of a trial, then what happens with students using alternative, perhaps even superior, technology? Trivia like different pagination in Kindle texts compared with converted texts distributed to VI students might introduce problems. Isn’t this setting up the trial for either (1) obvious bias by exclusion of VI students or (2) additional burden on VI students? Why not just wait until the device is comparable enough that harm is minimized and more is knowable in the long run about learning outcomes and economic models?
  10. But, wait, there might be a real technology barrier here. Software engineers know that the cost of repair for a missing requirement goes way up long after design, becoming deadly after deployment. Accessibility was not a requirement for the reader device although it’s a legal requirement in the university marketplace. Oops, this was a blunder. If the design of the Kindle software permits sliding in functionality like calls to the TTS engine, retrofit might not be too bad. But there’s a browser, keyboard, and lots of interactions that could get tricky. Usability is notably difficult to do well without experimentation and iteration. So, this is just one more case study relevant to the many software engineering texts in the Amazon market.
  11. Finally, as others have commented regarding the Chronicle.com forum, the railing against the A.D.A. as an intrusion on public rights, as a sign of backwardness among disabled individuals, and as a general disregard of human rights is, well, sickening. I wish those detractors a broken leg during a health insurance lapse, with a long flight of stairs to the rest room. That’s life, bozos, and we’ll all be disabled in the long run.
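
As an aside on point 1, commodity TTS really is just a function call away these days. Here is a minimal sketch using the third-party pyttsx3 Python library, which wraps whatever speech engine the operating system provides; the library choice and the spoken text are my own illustration, not anything tied to the Kindle or to the AT devices discussed above.

    # Minimal sketch of driving a TTS engine from Python, assuming the
    # third-party "pyttsx3" library (pip install pyttsx3) and a platform
    # speech engine (SAPI5, NSSpeechSynthesizer, or espeak) are available.
    import pyttsx3

    engine = pyttsx3.init()          # pick up the platform's speech engine
    engine.setProperty("rate", 180)  # approximate words per minute
    engine.say("Menu: Library. Item one of five: downloaded books.")
    engine.say("This is how a spoken interface announces menus and lists.")
    engine.runAndWait()              # block until queued speech finishes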

What is the listening experience? Hear me show you!

I use the Levelstar Icon to download books from Bookshare.org. My library is currently about 1000 books, complemented by daily doses of news feeds and newspapers. I’ve turned this situation into a demo:

Download the 15-minute AYWC-reading-demo.mp3 from http://apodder.org/stumbles/
You’ll hear me narrating book downloads and reading. The demo illustrates both (1) TTS reading books and news and (2) navigating menus and lists of books to perform operations commonly shown on a screen. This latter capability is the crux of the Kindle accessibility disagreement.


For more information on this device and interface, the Levelstar.com audio tutorials illustrate the standard practice of supplanting screens with voice-enabled menus. For the record, the operating environment is Linux, and the designers of the Icon and its partner product, the APH BraillePlus, are blind. Personally, I think the mainstream product industry has a lot to learn and gain from the AT industry it has so far excluded. Perhaps, following the Curb Cuts principle, even better universal designs will emerge from this mess.