Archive for January, 2010

Could TTS news reading beat Kindle and smart phones?

January 27, 2010

This post responds to concerns in the ComputingEd post ‘Kindles versus Smart phones: Age matters, testing matters’. A UGa study and commentary treat news reading as screen-dependent and vision-only. I suggest considering the print-disabled, TTS-dependent ecosystem to expand our understanding of human reading and assistive device capabilities.

Reading experiments might be broadened to include pure TTS, i.e. no screens. But first, which criteria matter: reading rate, level of absorption, device comfort, fidelity to the print experience, distribution cost and convenience?

For the record, I just read this article by RSS, then switched to my Newsstand and downloaded the NYTimes and other papers, cooperating with NFB Newsline and news companies I gratefully thank. Papers are delivered wirelessly in the XML-based DAISY format, retrieved and read on a Linux-powered mobile device (Levelstar Icon), spoken in an old-style “robotic voice”. For delivery efficiency and cost, this cannot be beat, and I think I absorb selective news reading better than ever. But how is the experience of print-disabled news readers factored into comparisons like this article?

This will soon be relevant if TTS reading on the Kindle, iPod/iTouch, etc. is fully enabled and adopted by some readers of proprietary delivery systems like Amazon’s. For proper evaluation, it will be necessary to compare eReading by TTS on mainstream devices to that provided by evolved readers like the APH Book Port, Humanware Victor Reader Stream, PlexTalk Pocket, Levelstar Icon, and (my favorite) GW Micro BookSense. Also important is the media format, currently favored as DAISY on these devices. And finally there is the provision of media, currently limited legally to print-disabled readers, as by the NFB (National Federation of the Blind) and non-profit services. In other words, there’s another ecosystem of reading, open only to the print-disabled, that might benefit those attracted to eReading.

Oh, my, here’s the “universal design” mantra again. ‘Reading news by screen’ is, of course, more limited than ‘reading by print or audio’. It’s possible that for some reading criteria the screen-free mode, or the open XML-based format with its reading devices and experienced reader population, may beat mainstream strategies!

Could these experiments be performed? Certainly; most universities have students who already do, or could, offer their experience with equipment provided through Disability Services. Fact quizzes and comprehension tests might raise questions about how our reading brains work and how well our reading devices and formats help or hinder. What research is in progress? Is there a CS agenda for this social and economic ecosystem? Why do people think reading is a vision-only activity? OK, comics, photos, and crosswords are a bit challenging, but plain old print is handled very well by TTS. Let’s open our eyes and ears and fingers to a fuller range of capabilities. I would love to be a test subject for eReading experiments.

CT for Everyone includes Accessibility!

January 24, 2010

This post responds to a solicitation for ideas on “Computational Thinking for Everyone”. It is a more succinct version of previous blog essays aimed at computing science educators and researchers.

Principle of “Clarifying Mundane Matters”: Use CT to refresh and deepen understanding of seemingly simple problems.

“Appreciate diverse abilities” Principle: Use CT to understand differing human abilities with respect to computational structures.

Multi-level Principle: Literacy, fluency, and CT apply to organizations as well as individuals.

An example domain is web accessibility for print-disabled people who use assistive technology, such as screen readers, to navigate, read, and interact with web pages. I write as a computing professional, self-trained to an intermediate skill level, with experience as an assistive technology consumer.

Consider the following mundane tasks: (1) complete the NAP form to request the free PDF of the CT workshop report; (2) retrieve two papers on CT from the ACM Digital Library; (3) find the next upcoming colloquium talk at some CS department; (4) plan and mark the sessions you want at an upcoming conference; (5) retrieve the data set of your locality’s projects from a government data site.

Such tasks should require only a few minutes and should not demand vision. Computational thinkers can conceptualize the underlying queries, abstractions, and navigation strategies, perhaps expressed in HTML syntax. Indeed, imagine yourself equipped only with hearing, a synthetic voice announcing events as you TAB and key your way around these document objects. Of course, there may be many representations of, say, a web form, perhaps a table of labels and form fields. But how is a screen reader to associate a label to announce with each edit box? Also, a page of departmental activities or a list of search results might be shown as a layout table, with styles indicating the different roles of text fragments. No go for a screen reader user, who must plow through linearly, applying heuristics to induce page components and meaningful descriptions of clusters of text fragments. Does this suggest AI to help the dumb, literal screen reader package? Maybe, but is that a good social solution?
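The linearization problem above can be made concrete with a short sketch (hypothetical markup, using Python’s standard html.parser): a layout table whose visual adjacency implies label/field pairing reduces to a flat stream of text fragments, which is roughly what a naive screen reader pass must work from.

```python
from html.parser import HTMLParser

# Hypothetical page: a layout table where only visual adjacency
# suggests which label goes with which field; nothing in the
# markup explicitly associates them.
PAGE = """
<table>
  <tr><td>Name</td><td><input type="text"></td></tr>
  <tr><td>Email</td><td><input type="text"></td></tr>
</table>
"""

class FlatText(HTMLParser):
    """Collect only the visible text fragments, as a naive linear pass would."""
    def __init__(self):
        super().__init__()
        self.fragments = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.fragments.append(text)

parser = FlatText()
parser.feed(PAGE)
print(parser.fragments)  # ['Name', 'Email'] -- the input fields vanish entirely
```

The edit boxes leave no trace in the text stream, so the reader is left guessing which announcement belongs to which field.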

Rather, standards can be negotiated so that browsers and screen readers can parse pages into semantic identifications, with useful descriptions announced to skilled users. Indeed, W3C standards compiled user observations, reasoning principles (perceivable, operable, understandable, robust), common sense, and experience surveys to yield a fledgling “science of accessibility”. For our mundane form problem, the standard prescription is an explicit relational notation pairing label text with form elements: one added line of code eliminates hours of screen reader user guessing. Semantics for page outlines are simply headings H1, H2, … H6, properly ordered and appropriately worded. Voila, linear or random search is eliminated, with further gains in design integrity, maintainability, and search engine positioning. Incidentally, screen reader surveys confirm missing form labels and poor or absent heading structure as main barriers and annoyances.
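A minimal sketch of the prescribed fix and a WAVE-style static check (hypothetical markup; Python’s standard html.parser): explicit label/for pairing lets a checker, or a screen reader, bind each announcement to its field, and flag the controls that lack one.

```python
from html.parser import HTMLParser

# Hypothetical form using the explicit relational notation described above:
# each <label for="..."> names the id of the control it announces.
FORM = """
<form>
  <label for="name">Name</label><input id="name" type="text">
  <label for="email">Email</label><input id="email" type="text">
  <input id="phone" type="text">  <!-- flaw: no associated label -->
</form>
"""

class LabelCheck(HTMLParser):
    """Statically pair labels with inputs and flag unlabeled controls."""
    def __init__(self):
        super().__init__()
        self.labeled = set()   # ids referenced by some <label for=...>
        self.inputs = set()    # ids of form controls seen

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "label" and "for" in attrs:
            self.labeled.add(attrs["for"])
        elif tag == "input" and "id" in attrs:
            self.inputs.add(attrs["id"])

checker = LabelCheck()
checker.feed(FORM)
unlabeled = checker.inputs - checker.labeled
print(sorted(unlabeled))  # ['phone'] -- one line of markup would fix it
```

This is essentially the kind of rule an evaluator like WAVE applies: a mechanical check, cheap for the author, that spares the reader hours of heuristic guessing.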

While the ultimate test is whether the screen reader user is substantially as capable as a sighted performer, engineering practices are readily available. An online evaluator, such as WAVE, can statically analyze and display page structure and flag standards anomalies. Development by “progressive enhancement” builds styling, scripting, and Flash onto POSH (Plain Old Semantic HTML). Browsers, especially on mobile devices and across economic and disability divides, are thereby enabled for “graceful degradation”.

The conference schedule problem illustrates the bad effects of the wrong level, or outright loss, of data structure in the delivery format, typically PDF. A conference program is certainly well structured, with presentation properties (title, author, abstract, etc.) and relationships to sessions, tracks, and locations. PDF promotes printable or purely visual representations, leaving print-disabled readers with a jumble of text or dependence on sighted interpreters and separate note-taking. Hypertext offers some structure within browser constraints. A non-traditional solution could be the hierarchical document structuring of the widely used, open, XML-based DAISY specification. Convenient pocket-sized screen-less devices navigate and read DAISY with natural TTS and easy marking or recorded notes. Watch for these capabilities coming soon on mainstream mobile platforms. CT must explore alternative document representations and find the most versatile structure-preserving generation and transformation techniques, especially when visual reading is limited by screen space, ambient conditions, or print disabilities. Moreover, the increased offering of government and science data sets demands full utilization of data structure, beyond a PDF-crippled distribution strategy.
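A hedged sketch of the structure-preserving alternative (hypothetical data; Python’s standard xml.etree): a conference program kept as a hierarchy of sessions and presentations, in the spirit of DAISY’s nested document structure rather than a flattened print layout. The element names here are illustrative, not the actual DAISY vocabulary.

```python
import xml.etree.ElementTree as ET

# Hypothetical conference program; session and talk names are invented.
program = {
    "Session A": [("Opening Talk", "R. Smith")],
    "Session B": [("CT and Accessibility", "J. Doe"), ("DAISY Tools", "A. Lee")],
}

root = ET.Element("program")
for session, talks in program.items():
    s = ET.SubElement(root, "session", title=session)
    for title, author in talks:
        p = ET.SubElement(s, "presentation")
        ET.SubElement(p, "title").text = title
        ET.SubElement(p, "author").text = author

# A TTS device can now navigate by session, then by presentation,
# instead of plowing through a flat jumble of text.
xml = ET.tostring(root, encoding="unicode")
print(xml)
```

Because the hierarchy survives in the delivery format, a screen-less reader can skip by session, drill into a talk, and mark it, exactly the operations a PDF text dump forecloses.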

Honestly, many CS organizations need a makeover of their web sites to keep up with trends now driven by .gov innovations coupled with world-wide web standards. Knowing far more after vision adaptation and accessibility indoctrination than when I was active five years ago, I wonder: where do students experience working with persons with disabilities and using assistive technologies; how do students with disabilities learn from inaccessible pedagogical tools; how do students gain fluency with accessible product presentations, and then become good consumers and caretakers, managers and procurers, developers and trainers in the workforce and in personal life? So, I challenge ‘CT for Everybody’ to use CT to rigorously and responsibly address the above mundane problems, and to expand CT to formalize the “science of accessibility” for integration into pedagogy and practice. Practically speaking, it’s easy to start by entering your URL into an evaluator like WAVE, then tracing error reports into the standards’ explanations. For a more vibrant experience, install the free, open-source NVDA screen reader for Windows or turn on Mac VoiceOver, turn off your screen, and use CT to accomplish tasks at a more semantic than visual level. Another opportunity is to work with local A.D.A. professionals and evaluate research and pedagogical products and materials with real persons with disabilities.

Using the framework of the Workshop Report, are these examples really CT? In the context of social good and broadening participation, this terminology matters less than the fact that “a visually impaired user of assistive technology almost gave up filling a form requesting a free PDF for lack of labeled form fields”. How mundane! But what an opportunity loss from multiplying this flaw across form instances and user efforts! My concern is institutional, rather than individual, illiteracy and unFITness. Somebody in an organization needs to be responsible for assuring such flaws are removed or never committed, which requires resources and commitment from others, usually via a published “accessibility statement”. Literacy is a matter of organizational awareness, and fluency yields a favorable outcome for as many people as possible. My suggested remedy is some rigorous thinking and remedial actions that respect standards and experimental data in the form of complaints and surveys. My hope is that “CT for Everyone” will encompass objectives like “universal design” and the increased benefits of CT applied within computer science education, ultimately influencing Everybody. Thank you.

References: “Universal Design for Web Applications” by Wendy Chisholm and Matt May; #a11y or #accessibility tagged tweets; the Amazon Kindle settlement; my blog “As Your World Changes”.