The following recommendation was offered at the CyberLearning workshop described in the previous post on CyberLearning, Lifelong Learning, and Accessibility. This post assumes background in both accessibility and national funding policies and strategies.
This is NOT an official statement but rather a proposal for discussion. Please comment on the merits.
Motivation: CyberLearning must be Inclusive
To participate fully in CyberLearning, persons with disabilities must be able to apply their basic learning skills using assistive technology in the context of software, hardware, data, documentation, and web resources. Trends toward increased use of visualizations both present difficulties and open new arenas for innovative applications of computational thinking.
Often, the software, hardware, and artifacts have not been engineered for these users, for unforeseen uses, or for integration with a changing world of assistive tools. Major losses result: persons with disabilities are excluded or must struggle; cyberlearning experiments do not include data from this population; and insights from the cognitive styles of diverse learners cannot contribute to the growth of understanding of cyberlearning.
Universal Design Goals
Universal design embodies a set of principles and engineering techniques for producing computational tools and real-world environments usable by persons often far different from the original designers. A broader design space is explored with different trade-offs, using results from the Science of Design (a previous CISE initiative). Computational thinking emphasizes abstraction in managing representations, and representations lead to the core challenges for users with disabilities and different learning styles. For example, a person with vision loss may receive information through an audio channel via text-to-speech rather than through a graphical interface presenting the same underlying information visually. The right underlying semantic representation separates the basic information from its sensory-dependent renderings, enabling a wider suite of tools and adaptations for different learners. This approach transcends universal design by tapping back into the learning styles and methods employed effectively by persons with many kinds of disabilities, which may then lead to improved representations for learners with varying forms of computational and data literacy.
Beyond Universal Design as Research
“Beyond Universal Design” suggests that striving for universal design opens many research opportunities for understanding intermediate representations, abstraction mechanisms, and how people use these differently. This approach to CyberLearning weaves together threads of NSF research: Science of Design and computational thinking from CISE, human interaction (IRIS), and many programs of research on learning and assessment.
Essential Metadata Requirements
A practical first step is a system of meta-data that clearly indicates suitability of research software and associated artifacts for experimental and outreach uses. For example, a pedagogical software package designed to engage K-12 students in programming through informal learning might not be usable by people who cannot drag and drop objects on a screen. Annotations in this case may serve as warnings that could avoid exclusion of such students from group activities by offering other choices or advising advance preparation. Of course, the limitations may be superficial and easily addressed in some cases by better education of cyberlearning tool developers regarding standards and accessibility engineering.
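To make the idea concrete, here is a toy Python sketch of what such meta-data might look like and how annotations could serve as warnings for activity planners. The field names and the tool are purely hypothetical, not an established schema:

```python
# Hypothetical accessibility meta-data for a cyberlearning tool.
# Field names are illustrative only -- no standard schema is implied.
tool_metadata = {
    "name": "BlockBuilder",                    # invented K-12 programming tool
    "input_methods": ["mouse_drag_and_drop"],  # no keyboard alternative
    "output_modes": ["visual"],                # no audio or tactile rendering
    "wcag_level": None,                        # no conformance claim on record
}

def accessibility_warnings(meta):
    """Return advisory warnings an activity planner could act on."""
    warnings = []
    if "keyboard" not in meta["input_methods"]:
        warnings.append("Requires drag and drop; no keyboard alternative.")
    if meta["output_modes"] == ["visual"]:
        warnings.append("Visual-only output; unusable with screen readers.")
    if meta["wcag_level"] is None:
        warnings.append("No WCAG conformance claim on record.")
    return warnings

for w in accessibility_warnings(tool_metadata):
    print("WARNING:", w)
```

The point is not the particular fields but that machine-readable annotations let planners offer other choices or prepare in advance, rather than discover exclusion during a group activity.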
Annotations also delimit the scope of experimental results obtained with the pedagogical software, e.g., by better describing the population of learners.
In the context of social fairness and practical legal remedies as laid out by the Department of Justice regarding the Amazon Kindle and other emerging technology, universities can take appropriate steps in their technology adoption planning and implementation.
Policies and Procedures to Ensure Suitable Software
For NSF, appropriate meta-data labeling then leads to planning and eventual changes in the ways it manages its extensive base of software. Proposals may be asked to include meta-data for all software used in or produced by research. Operationally, this will require proposers to become familiar with the standards and methods for engineering software for users employing adaptive tools. While in the short run this remedial action may seem limiting, in the long run the advanced knowledge will produce better designed and more usable software. At the very least, unfortunate uses of unsuitable software may be avoided in outreach activities and experiments.
Clearly, NSF must devise a policy for managing unsuitable software, preferably within a 3 year time frame from inception of a meta-data labeling scheme.
Opportunities for Multi-Sensory Representation Research
Rather than viewing Suitable Software as a penalty system, NSF should find many new research programs and solicitation elements. For example, visual and non-visual (e.g., using text-to-speech) representations, or mouse versus speech input, can be compared for learning effectiveness. Since many persons with disabilities are high functioning in STEM, better understanding of how they operate may well lead to innovative representations.
Additionally, many representations taken for granted by scientists and engineers may not be as usable by a wider citizenry with varying degrees of technical literacy. For example, a pie chart instantly understandable by a sighted person may not hold much meaning for people who do not understand proportional representations, and may be completely useless for a person without sight, yet be rendered informative by tactile manipulation or a chart-explainer module.
Toward a Better, Inclusive Workforce
Workforce implications are multi-fold. First, a population of STEM tool developers better attuned to the needs of persons with disabilities can improve cyberlearning for as much as 10% of the general population. Job creation and retention should improve for many of the estimated 70% unemployed and under-employed persons with disabilities, offering both better quality of life and reduced lifetime costs of Social Security and other sustenance. There already exists an active corps of technologically adept persons with disabilities with strong domain knowledge and cultural understanding regarding communities of disabilities. The “curb cuts” principle also suggests that A.D.A. adaptations for persons with disabilities offer many unforeseen, but tacitly appreciated, benefits for a much wider population, and at reasonable cost. NSF can reach out to these active developers with disabilities to educate its own staff as well as the STEM education and development worlds.
Summary of recommendation
- NSF adopt a meta-data scheme that labels cyberlearning research products as suitable or unsuitable for learners of different abilities, with emphasis on the current state of assistive technology and adaptive methods employed by persons with disabilities.
- NSF engage its communities in learning the necessary science and engineering for learning by persons with disabilities, e.g. using web standards and perhaps new cyberlearning tools developed for this purpose.
- NSF develop a policy for managing suitability of software, hardware, and associated artifacts in accordance with civil rights directives to universities and general principles of fairness.
- NSF establish programs to encourage innovation in addressing problems of unsuitable software and opportunities to create multiple representations, using insights derived from limitations of software as well as studies of high-performing learners with disabilities.
- NSF work with disability-representing organizations to identify explicit job opportunities and scholarships for developers specializing in cyberlearning tools and in education of the cyberlearning education and development workforce.
Note: this group may possibly be the National Center on Technology Innovation.
The field of software engineering made notable strides in the 1990s when the Department of Defense promulgated, via its contracting operations, a Capability Maturity Model supported by the Software Engineering Institute (SEI) at Carnegie Mellon University. Arguably, the model and resulting forces were more belief-based than experimentally validated, but “process improvement through measurement” became a motivating mantra. For more detail see the over-edited Wikipedia article on CMM.
This post is aimed at accessibility researchers and at managers and developers of products with an accessibility requirement, explicitly or not. Visually impaired readers of this post may find some ammunition for accessibility complaints and for advice to organizations they work with.
The 5 Levels of the Capability Maturity Model
Here are my interpretations of the 5 levels of capability maturity focused on web accessibility features:
Level 1: Chaotic, Undefined
Each web designer follows his or her own criteria for good web pages, with no specific institutional target for accessibility. Some designers may know the W3C standards or equivalents, but nothing requires the designers to use them.
Level 2: Repeatable but Still Undefined
Individual web designers can, through personal and group experience, estimate page size, say in units of HTML elements and attributes. Estimation enables better pricing against requirements. Some quality control is in place, e.g. using validation tools and maybe user trials, but the final verdict on suitability of web sites for clients rests in the judgements of individual designers. Should those designers leave the organization, their replacements have prior products but not necessarily any documented experience to repeat the process or achieve comparable quality.
Level 3: Defined
Here, the organization owns the process, which is codified and used for measurement of both project management and product quality. For example, a wireframe or design tool might not be a designer option but rather a process requirement subject to peer review. Standards such as W3C might be applied, but they are not as significant for capability maturity as the fact that SOME process is defined and followed.
Level 4: Managed
At this level, each project can be measured for both errors in product and process with the goal of improvement. Bug reports and accessibility complaints should lead to identifiable process failures and then changes.
Level 5: Optimizing
Beyond Managed Level 4, processes can be optimized for new tools and techniques using measurements and data rather than guesswork. For example, a question like “is progressive enhancement an improvement or not?” can be analytically framed in terms of bug reports, customer complaints, developer capabilities, product line expansion, and many other qualities.
How well does CMM apply to accessibility?
Personally, I’m not at all convinced a CMM focus would matter in many environments, but it’s still a possible way to piggyback on a movement that has influenced many software industry thinkers and managers.
Do standards raise process quality?
It seems obvious to me that standards such as W3C raise awareness of product quality issues that force process definition and also provide education on meeting the standards. But is a well defined standard either necessary or sufficient for high quality processes?
An ALT tag standard requires some process point where ALT text is constructed and entered into the HTML. A process with any measurement of product quality will flag missing ALT texts, which leads to process improvement because it is so patently silly to require rework on such a simple task. Or are ALT tags really that simple? A higher level of awareness of how ALT tags integrate with the surrounding text and actually help visually impaired page users requires more sensitivity, care, review, and user feedback. The advantage of standards is that accessibility and usability qualities can be measured in a research context, with costs then amortized across organizations and transformed into education expenses. So, process improvement doesn’t immediately or repeatably lead to true product quality, but it does help as guidance.
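Mechanically, flagging missing ALT texts really is the easy part. Here is a toy Python sketch, using only the standard library and an invented sample page, of the kind of check a defined process could automate; judging whether the ALT text actually helps a reader still takes human review:

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Flag <img> elements whose alt attribute is missing or empty."""
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        alt = a.get("alt")
        if alt is None:
            self.problems.append("missing alt: " + a.get("src", "?"))
        elif not alt.strip():
            # alt="" is legal for purely decorative images,
            # but deserves a review step in the process
            self.problems.append("empty alt (decorative?): " + a.get("src", "?"))

# Invented sample page: one image lacks alt text entirely.
page = '<html><body><img src="pie.png"><img src="logo.png" alt="Site logo"></body></html>'
checker = AltChecker()
checker.feed(page)
print(checker.problems)
```

A check like this catches the patently silly rework case in seconds; the harder questions of integration and user benefit remain where they belong, in review and user feedback.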
Does CMM apply in really small organizations?
Many web development projects are contracted through small one-person or part-time groups. Any form of measurement represents significant overhead on getting the job done. For this, CMM spawned the Personal and Team Software Processes for educational and industrial improvements. Certainly professionals who produce highly accessible web sites have both acquired education and developed some form of personal discipline that involved monitoring quality and conscious improvement efforts.
Should CMM influence higher education?
On the other hand, embedded web development may inherit its parent organization’s quality and development processes, e.g. in a library or IT division of a university. The abysmal level of accessibility across universities and professional organizations suggests that lack of attention to and enforcement of standards is a major problem. My recorded stumbling around Computer Science websites surfaced only one organization that applied standards I could follow to navigate web pages effectively, namely the University of Texas, which has a history of accessibility efforts. Not surprisingly, an accessibility policy reinforced with education, advocacy, and enforcement led small distributed departmental efforts to better results. Should lawsuits or even a commitment to educational fairness for persons with disabilities suddenly change the law of the land, at least one institution stands out as a model of both product and process quality.
Organizations can define really awful processes
A great example of this observation is Unrepentant’s blog and letter to the DoJ about PDF testimonies. Hours of high-minded social justice and business case talk were rendered as PDFs of plain text on Congressional websites. Not only is PDF a pain for visually impaired people, no matter how many accessibility techniques it applies; the simple fact of requiring an application external to the browser, here Adobe Reader, is a detriment to using the website on many devices such as my Levelstar Icon or smart phones. My bet is that sure enough there’s a process on Congressional websites, geared to minimize effort by exporting Word docs into PDF and then a quick upload. The entire process is wrong-headed when actual user satisfaction is considered, e.g. how often are citizens with disabilities and deviant devices using or skipping valuable testimony and data? Indeed, WCAG standards hint, among many other items, that, surprise, web pages should use HTML, which readily renders strings of text quite well for reading across a wide variety of devices, including assistive technology.
The message here is that a Level 3 process such as “export testimony docs as PDF” is detrimental to accessibility without feedback and measurement of actual end usage. The Unrepentant blogger claims only a few hours of work required for a new process producing HTML, which I gratefully read by listening on the device of my choice in a comfortable location and, best of all, without updating the damned Adobe reader.
Quality oriented organizations are often oblivious about accessibility
The CMM description in the URL at the start of this article is short and readable but misses the opportunity to include headings, an essential semantic markup technique. I had to arrow up and down this page to extract the various CMM levels rather than apply a heading navigation as in this blog post. Strictly speaking the article is accessible by screen reader but I wouldn’t hire the site’s web designer if accessibility were a requirement because there’s simply much more usability and universality well worth applying.
I have also bemoaned the poor accessibility of professional computing organization websites. Until another generation of content management systems comes along, it’s unlikely we’ll find improvement in these websites, although a DoJ initiative could accelerate this effort.
CMM questions for managers, developers, educators, buyers, users
So, managers, are your web designers and organization at the capability level you desire?
How would you know?
- Just sample a few pages in the WAVE validator from WebAim.org. Errors flagged by WAVE are worth asking web developers about: do these errors matter? How did they occur? What should be changed or added to your process, if anything? But not all errors are equally important, e.g. unlabelled forms may cause abandoned transactions and lost sales, while missing ALT tags may just indicate designer ignorance. And what if WAVE comes up clean? Now you need to validate the tool against your process to know whether you’re measuring the right stuff. At the very least, every manager or design client has automated feedback within seconds from tools like WAVE and a way to hold web developers accountable for widespread and easily correctable flaws.
- Ask for the defined policy. Would an objective like W3C standards suffice? Well, that depends on costs within the organization’s process, including both production and training replacements.
- Check user surveys and bug reports. Do these correspond to the outputs of validation tools such as WebAim’s WAVE?
- Most important, check for an accessibility statement and assure you can live with its requirements and that they meet social and legal standards befitting your organizational goals.
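For the curious, here is a crude Python sketch of one of the checks behind an error class mentioned above, unlabelled form fields. It is illustrative only, not a substitute for a real validator like WAVE, and the sample form is invented:

```python
from html.parser import HTMLParser

class FormLabelChecker(HTMLParser):
    """Crude check: every text input should have a matching <label for="...">.
    A rough stand-in for one class of error a validator flags."""
    def __init__(self):
        super().__init__()
        self.input_ids = []
        self.label_fors = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type", "text") == "text":
            self.input_ids.append(a.get("id"))
        elif tag == "label" and "for" in a:
            self.label_fors.add(a["for"])

# Invented form: the "phone" field has no label, so a screen reader
# user hears an anonymous edit box -- a likely abandoned transaction.
form = ('<form><label for="email">Email</label>'
        '<input type="text" id="email">'
        '<input type="text" id="phone"></form>')
c = FormLabelChecker()
c.feed(form)
unlabeled = [i for i in c.input_ids if i not in c.label_fors]
print(unlabeled)
```

The manager’s question then writes itself: if a twenty-line script can find this, why did the organization’s process let it through?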
Developers, are you comfortable with your process?
Level 1 is often called “ad hoc” or “chaotic” for a reason: it’s a wake-up call. For many people, a defined process seems constraining of design flexibility and geek freedom. For others, a process removes many sources of mistakes and interpersonal issues about ways of working. Something as trivial as a missing or stupid ALT tag hardly seems worthy of contention, yet a process that respects accessibility must at some point have steps to insert and review ALT text, requiring only seconds in simple cases and minutes if a graphic lacks purpose or context, with many more minutes if the process mis-step shows up only in a validator or user test. Obviously, processes can have high payoffs, or receive scolding from bloggers like Unrepentant and me if the process has the wrong goal.
Buyers of services or products for web development, is CMM a cost component?
Here’s where high leverage can be attained or lost. Consider procuring a more modern content management system. Likely these vary in the extent to which they export accessible content, e.g. making it easier or harder to provide semantic page outlines using headings. There are also issues of accessibility of the CMS product functions to support developers with disabilities.
In the context of CMM, a buyer can ask the same questions as a manager about a contractor organization’s process maturity, graded against an agreed-upon accessibility statement and quality assessment.
Users and advocates, does CMM help make your case?
If we find pages with headings much, much easier to navigate but a site we need to use lacks headings, it’s constructive to point out this flaw. It seems obvious that a web page with only an H4 doesn’t have much process behind its production, but is this an issue of process failure, developer education, or missing requirements? If, by any chance, feedback and complaints are actually read and tracked, a good manager would certainly ask about the quality of the organization’s process as well as that of its products.
Educators, does CMM thinking improve accessibility and usability for everyone?
Back to software engineering: getting to Level 5 was a BFD for many organizations, e.g. those related to NASA or facing international competition with Indian enterprises. Software engineering curricula formed around CMM, and government agencies used it to force training and organizational change. The SEI became a major force, and software engineering textbooks devoted several chapters to project management and quality improvement. Frankly, as a former software engineering educator, I tended to skim this content to get to testing, which I considered more interesting, concrete, and relevant.
By the way, being sighted at the time, I didn’t notice the omission of accessibility as a requirement or standards body of knowledge. I have challenged Computing Education blogger and readers to include accessibility somewhere in courses, but given the combination of accreditation strictures and lack of faculty awareness, nothing is likely to happen. Unless, well, hey, enforcement just might change these attitudes. My major concern is that computing products will continue to be either in the “assistive technology ghetto” or costly overhauls because developers were never exposed to accessibility.
Looking for exemplars, good or bad?
Are there any organizations that function at level 5 for accessibility and how does that matter for their internal costs and for customer satisfaction as well as legal requirements?
Please comment if your organization has ever considered issues like CMM and where you consider yourself in a comparable level.
This post tells the story of how the NVDA screen reader helped a person with vision loss solve a puzzle from a former workplace. Way to go, Grandpa Dave, and thanks for permission to reprint from the NVDA discussion list on freelists.org.
Grandpa Dave’s Story
From: Dave Mack
Date: Oct 29
Subj: [nvda] Just sharing a feel good experience with NVDA
Hi, again, folks, Grandpa Dave in California, here –
I have hesitated sharing a recent experience I had using NVDA because I know this list is primarily for purposes of reporting bugs and fixes using NVDA. However, since this is the first community of blind and visually-impaired users I have joined since losing my ability to read the screen visually, I have decided to go ahead and share this feel-good experience where my vision loss has turned out to be an asset for a group of sighted folks. A while ago, a list member shared their experience helping a sighted friend whose monitor had gone blank by fixing the problem using NVDA on a pen drive so I decided to go ahead and share this experience as well – though not involving a pen drive but most definitely involving my NVDA screen reader.
Well, I just had a great experience using NVDA to help some sighted folks where I used to work and where I retired from ten years ago. I got a phone call from the current president of the local Federal labor union I belonged to and she explained that the new union treasurer was having a problem updating their large membership database with changes in the union’s payroll deductions that they needed to forward to the agency’s central payroll for processing. She said they had been working off-and-on for almost three weeks and no one could resolve the problem even though they were following the payroll change instructions I had left on the computer back in the days I had written their database as an amateur programmer. I was shocked to hear they were still using my membership database program as I had written it almost three decades ago! I told her I didn’t remember much about the dBase programming language but I asked her to email me the original instructions I had left on the computer and a copy of the input commands they were keying into the computer. I told her I was now visually impaired, but was learning to use the NVDA screen reader and would do my best to help. She said even several of the Agency’s programmers were stumped, but they did not know the dBase programming language.
A half hour later I received two email attachments, one containing my thirty-year-old instructions and another containing the commands they were manually keying into their old pre-Windows computer, still being used by the union’s treasurer once a month for payroll deduction purposes. Well, as soon as I brought up the two documents and listened to a comparison using NVDA, I heard a difference between what they were entering and what my instructions had been. They were leaving out some “dots,” or periods, which should have been included in their input strings into the computer. I called the Union’s current president back within minutes of receiving the email. Everyone was shocked and said they could not see the dots or periods. I told them to remember they were probably still using a thirty-year-old low-resolution computer monitor and an old dot-matrix printer, which were making the dots or periods appear to be part of the letters they were situated between.
Later in the day I got a call back from the Local President saying I had definitely identified the problem, thanking me profusely, and saying she was telling everyone I had found the cause of the problem by listening to errors none of the sighted folks had been able to see. And, yes, they were going to upgrade their computer system now after all these many years. (laughing) I told her to remember this experience the next time anyone makes a wisecrack about folks with so-called impairments. She said it was a good lesson for all. Then she admitted that the reason they had not contacted me sooner was that they had heard through the grapevine that I was now legally blind and everyone assumed I would not be able to be of assistance. What a mistake and waste of time that ignorant assumption was, she confessed.
Well, that’s my feel-good story, but, then, it’s probably old hat for many of you. I just wanted to share it as it was my first experience teaching a little lesson to sighted people in my own small way, with the help of NVDA.
Grandpa Dave in California
Moral of the Story: Screen Readers Augment Our Senses in Many Ways, and an Invitation to Comment
Do you have a story where a screen reader or similar audio technology solved problems where normal use of senses failed? Please post a comment.
And isn’t it great that we older folks have such a productive and usable way of overcoming our vision losses? Thanks, NVDA project developers, sponsors, and testers.
RSS is a web technology for distributing varieties of content to wide audiences with minimal fuss and delay, hence its name, “Really Simple Syndication”. However, I’m finding this core capability is less well understood, with perhaps shared barriers among visually impaired and older adult web users. This article attempts to untangle some issues and identify good explanatory materials as well as necessary web tools. If, indeed, there is an “RSS Divide” rather than just a poor sample of web users and my own difficulties, perhaps the issues are worth wider discussion.
So, what is RSS?
Several good references are linked below, or just search for “RSS explained”. Here’s my own framework:
Think of these inter-twined actions: Announce, Subscribe, Publish, Fetch, Read/Listen/View:
- Somebody (called the “Publisher”) has content you’re welcome to read. In addition to producing descriptive web pages, they also tell you an address where you can find the latest content, often called a “feed”. These are URLs that look like abc.rss or abc.xml and often appear with words or graphics saying “RSS”.
- When the Publisher has something new written or recorded, they, or their software, add an address to this feed, i.e. they “publish”. For example, when I publish this article on WordPress, the text will show up on the web page but my blog feed will also have a new entry. You can keep re-checking this page for changes, but that wastes your time, right? And sooner or later, you forget about me and my blog, sniff. Here cometh the magic of RSS!
- You (the “Subscriber”) have a way, the RSS client, of tracking my feed to get the new article. You “subscribe” to my feed by adding its address to this “RSS client”. You don’t need to tell me anything, like your email; just paste the address in the right place to add it to the list of feeds the RSS client manages for you.
- Now, dear subscriber, develop a routine in your reading life where you decide, “ok, time to see what’s new on all my blog subscriptions”. So you start your RSS client which then visits each of the subscribed addresses and identifies new content. This “Fetch” action is like sending the dog out for the newspapers, should you have such a talented pet. The client visits each subscribed feed and notes and shows how many articles are new or unread in your reading history.
- At your leisure, you read the subscribed content not on the Publisher’s website but rather within the RSS client. Now, that content might be text of the web page, or audio (called podcasts), or video, etc. RSS is the underlying mechanism that brings subscribed content to your attention and action.
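Under the hood, a feed is just a small XML document sitting at that address. Here is a toy sketch, with an invented feed and invented URLs, of what an RSS client does during the Fetch step, using only Python’s standard library:

```python
import xml.etree.ElementTree as ET

# A minimal, invented RSS 2.0 feed -- the kind of document that lives
# at a feed address like example.com/blog.rss.
feed_xml = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item>
      <title>RSS explained</title>
      <link>http://example.com/rss-explained</link>
      <pubDate>Mon, 01 Feb 2010 09:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

# The Fetch step: a real client downloads this XML from each subscribed
# address, then compares items against your reading history.
root = ET.fromstring(feed_xml)
for item in root.iter("item"):
    print(item.findtext("title"), "->", item.findtext("link"))
```

A real client adds the downloading, the unread-count bookkeeping, and the reading interface, but this little loop is the heart of the Fetch action.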
What’s the big deal about RSS?
The big deal here is that the distribution of content is syndicated automatically and nearly transparently. Publishers don’t do much extra work but rather concentrate on their writing, recording, and editing of content. Subscribers bear the light burden of integrating an RSS client into their reading routines, but this gets easier, albeit with perhaps too many choices. Basically, RSS is a productivity tool for flexible readers. RSS is especially helpful for those of us who read by synthetic speech so we don’t have to fumble around finding a web site then the latest post — it just shows up ready to be heard.
Commonly emphasized, RSS saves you lots of time if you read many blogs, listen to podcasts, or track news frequently. No more trips to the website to find out there’s nothing new, rather your RSS client steers you to the new stuff when and where you’re ready to update yourself. I have 150 currently active subscriptions, in several categories: news (usatoday, cnet, science daily, accesstech,…); blogs (technology, politics, accessibility, …), some in audio. It would take hours to visit all the websites, but the RSS client spans the list and tells me of new articles or podcasts in a few minutes while I’m doing something else, like waking up. With a wireless connection for my RSS client, I don’t even need to get out of bed!
This means I can read more broadly, not just from saving time, but also having structured my daily reading. I can read news when I feel like tackling the ugly topics of the day, or study accessibility by reading blogs, or accumulate podcasts for listening over lunch on the portico. Time saved is time more comfortably used.
Even more, I can structure and retain records of my reading using the RSS client. Mine arranges feeds in trees so I can skip directly to science if that’s what I feel like. I can also see which feeds are redundant and how they bias their selections.
So, RSS is really a fundamental way of using the Web. It’s not only an affordance of more comfort, but also becoming a necessity. When all .gov websites, local or national, plus all charities, etc. offer RSS feeds, it’s assumed citizens are able to keep up and really utilize that kind of content delivery. For example, whitehouse.gov has feeds for news releases and articles by various officials that complement traditional news channels with more complete and honestly biased content, i.e. you know exactly the sources, in their own words.
The down side of RSS is overload, more content is harder to ignore. That’s why it’s important to stand back and structure reading sources and measure and evaluate reading value, which is enabled by RSS clients.
Now, about those RSS clients
After 2+ years of happily relying on the Levelstar Icon Mobile Manager RSS client, I’m rather abashed at the messy world of web-based RSS clients, unsure what to recommend to someone starting to adopt feeds.
- Modern browsers provide basic support for organizing bookmarks, with RSS feeds as a specific type. E.g. Firefox supports “live bookmarks”, recognizing feeds when you click the URL. A toolbar provides names of feeds to load into tabs. Bookmarks can be categorized, e.g. politics or technology. Various add-on components provide sidebar trees of feeds to show in the main reading window. Internet Explorer offers comparable combinations of features: subscribing, fetching, and reading.
- Special reader services expand these browser capabilities. E.g. Google Reader organizes trees of feeds, showing the number of unread articles. Sadly, Google Reader isn’t at this moment very accessible for screen readers, with difficult-to-navigate trees and transfers to text windows. Note: I’m searching for better recommendations for visually impaired readers.
- I’ve not used but have heard of email-based RSS readers, e.g. for Outlook. Many feed subscriptions offer to email new articles, and you then manage the articles in folders or however you handle email.
- Smart phones have apps for managing feeds, but here again I’m a simple cell phone caller only, inexperienced with mobile RSS. I hear Amazon Kindle will let you buy otherwise free blogs.
- Since podcasts are delivered via feeds, services like iTunes qualify but do not support full-blown text article reading and management.
So, I’d suggest first seeing whether your browser version handles feeds adequately and trying out a few. Google Reader, if you are willing to open or already have a Google account, works well for many sighted users and can be used rather clumsily if you’re partially sighted like me. Personally, when my beloved Icon needs repair, I find any of the above services far less productive and generally put my feed reading fanaticism on hiatus.
Note: a solid RSS client will export and import feeds from other clients using an OPML file. Here are Susan’s feeds on news, technology, science, Prescott, and accessibility, with several feeds for podcasts. You’re welcome to save this file and edit out the feed addresses or import the whole lot into your RSS client.
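For the curious, OPML is itself a small XML format: each feed is an outline element carrying an xmlUrl attribute, nested to mirror the client’s folder tree. A minimal Python sketch (the sample file is invented for illustration) that pulls every feed address out of an OPML export:

```python
import xml.etree.ElementTree as ET

# An invented OPML export; real clients nest outlines to mirror folders.
SAMPLE_OPML = """<?xml version="1.0"?>
<opml version="1.0">
  <body>
    <outline text="News">
      <outline text="Example" type="rss" xmlUrl="http://example.org/feed.xml"/>
    </outline>
    <outline text="Science" type="rss" xmlUrl="http://example.org/sci.xml"/>
  </body>
</opml>"""

def feed_urls(opml_xml):
    """Collect every feed address (xmlUrl attribute) from an OPML file."""
    root = ET.fromstring(opml_xml)
    return [o.get("xmlUrl") for o in root.iter("outline") if o.get("xmlUrl")]
```

Because the format is this simple, moving a whole reading life from one client to another is one export and one import.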
Is there more to feeds in the future?
You betcha, I believe. First, feed addresses are data that can be shared on social media sites like the Delicious feed manager. This enables sharing and recommending blogs and podcasts among fans.
A farsighted project exploiting RSS feeds is Jon Udell’s Elm City community calendar project. The goal is to encourage local groups to produce calendar data in a standard format with categorization so that community calendars can be merged and managed for the benefit of everybody. Here’s the Prescott Arizona Community Calendar.
The brains behind RSS are now working on more distributed, real-time distribution of feeds; see Dave Winer’s Scripting News Cloud RSS project.
In summary, those who master RSS will be the “speed readers” of the web compared to others waiting for content to show up in their email boxes or wading through ads and boilerplate on websites. Indeed, many of my favorite writers and teachers have websites I’ve never personally visited but still read within a day of new content. This means a trip to these websites is often for the purpose of commenting or spending more time reviewing their content in detail, perhaps over years of archives.
References on RSS
- What is RSS? RSS Explained in simple terms
- Video on RSS in Plain English, emphasizing speedy blog reading in web-based RSS readers
- Geeky explanations of RSS from Wikipedia
- Whitehouse.gov RSS links and explanation (semi-geeky)
- Examples of feeds
- Diane Rehm podcast show feed
Facing safety trade-offs through risk management
It’s time to structure my wanderings and face denial about the special problems and dangers of living with partial eyesight. This post starts a simple framework for analyzing risks and defining responses. Sighted readers may become aware of hassles and barriers presented to Vision Losers, who may learn a few tricks from my experience.
Life is looking especially risky right now: financial follies, pirate attacks, natural disasters, ordinary independent activities, … A Vision Loser needs special precautions, planning, and constant vigilance. So, here I go trying to assemble needed information in a format I can use without freaking myself back into a stupor of denial.
Guiding Lesson: Look for the simplest rule that covers the most situations.
Appeals to experts and clever web searches usually bring good information, lots of it, way more than I can use. I discussed this predicament in the context of Literacy when I realized I couldn’t read the pie charts well enough to understand asset allocations. I had 500 simulations from my “wealth manager”, projections to age 95, and my own risk profiles. But what I needed was a simple rule to live by that fit these now absurd models, like
“Live annually on 4% of your assets”.
Another rule, one I obey, that could have saved trillions of dollars is:
Housing payment not to exceed 1/3 of income.
Such rules help focus on the important trade-offs of what we can and cannot do sensibly rather than get bogged down in complex models and data we can’t fully understand or properly control. If we can abstract an effective rule from a mass of details, then we might be able to refresh the rule from time to time to ask what changes in the details materially affect the rule and what adjustments can cover these changes. We can also use generally accepted rules to validate and simplify our models. This is especially important for the partially sighted since extra work goes into interpreting what can be seen and considerable guess work into what’s out there unseen.
I need comparable safety rules to internalize, while recognizing their exceptions and uncertainty. Old rules don’t work too well, like “Look both ways before crossing the street” (also listen, but what about silent cars?) or “turn on CNN for weather information” (if I can’t read the scrolling banners).
Background from Software risk management
When I taught software engineering, the sections on project management always emphasized the need for Risk Management in the context of “why 90% of software projects fail”. This subject made a good basis for a teamwork lab exercise: prioritize the risks for a start-up project. I dubbed this hypothetical project Pizza Central, a web site to compare local pizza deals and place orders, with forums for pizza lovers. Since all students are domain experts on both pizza deliveries and web site use, they could rapidly fill out a given template. Comparing results always revealed a wide divergence of risks among teams: some focused on website outages, others on interfaces, some on software platforms. So one lesson conveyed among teams was “oops, we forgot about that”. My take-away for them was that this valuable exercise is easy enough to do but requires assigned responsibilities for mitigating risks, tracking risk indicators, and sometimes unthinkable actions, like project cancellation.
I am about to try a bit of this medicine on myself now. Risk is a complicated subject; see Wikipedia. I’ll use the term to mean “occurrence of a harmful event” in the context of a project or activity. The goal is to mitigate both the occurrence and the effects of these nasty events. But we also need indicators to tell when an event is ongoing or has happened. Since mitigation has a cost, both to prevent and to recover from events, it helps to prioritize events by likelihood and severity. So, envision a spreadsheet with event names; ratings for likelihood, severity, and costs; and perhaps a formula to rank importance. Associated with these events are lists of indicators and proposed mitigation actions with estimated costs. This table becomes part of a project plan, with assigned actions for mitigation and risk tracking awareness across team members as a regular agenda item at project meetings.
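That spreadsheet is simple enough to sketch in a few lines of code. Assuming a plain likelihood-times-severity ranking formula, one common convention for such a sheet (the events and ratings below are invented for illustration):

```python
# Hypothetical risk register; ratings (1-10 scales) are invented for
# illustration.  Importance is ranked by likelihood times severity,
# a common convention for the spreadsheet's ranking formula.
risks = [
    {"event": "Struck by lightning", "likelihood": 8, "severity": 9},
    {"event": "Trip over something", "likelihood": 5, "severity": 4},
    {"event": "Hit by vehicle", "likelihood": 3, "severity": 10},
]

def importance(risk):
    """Rank risks by likelihood times severity."""
    return risk["likelihood"] * risk["severity"]

# Highest-priority risks first, the order a project plan would review them.
ranked = sorted(risks, key=importance, reverse=True)
```

The point is less the arithmetic than the discipline: every event named, rated, and revisited as a regular agenda item.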
Risk analysis for my workout/relaxation walk
I will follow this through on the example of my daily workout walk. I do not use my white cane because I feel safe enough, but really, is this a good tradeoff? Without the cane, I can walk briskly, arms swinging, enjoying shadows, tree outlines, and the calls of quail in the brush. The long white cane pushes my attention into the pavement, responding to minor bumps and cracks my strides ignore, and there’s even a rhythm to the pavement that adjusts my pace to a safe sensation. I would not think of walking without my guiding long white cane on a street crowded with consumers or tourists but this walk covers familiar terrain at a time frequented by other recreational walkers. This situation is a trade-off unique to the partially sighted, who only themselves can know what they can safely see and do, living with the inevitable mistakes and mishaps of the physical world.
Here are a few events, with occasional ratings on a 1-10 scale. For this application, I feel it’s more important to ask the right questions, albeit some silly, to surface my underlying concerns and motivate actions.
Event: Struck by lightning, falling tree, or other bad weather hazard
Indicators: Strong winds, thunder, glare ice
Likelihood: 8, with walks during
Severity: 9, people do get whacked
Mitigation Actions and costs:
- -7, look for dark clouds, but I can’t see well enough in all directions over the mountains
- 0, Listen for distant thunder, also golf course warning sirens
- -1, check CNN and weather channels, but it’s hard to find the channel with a low-accessibility remote and cable box, and scrolling banners and warning screens are not always announced. FIND RELIABLE, USABLE WEATHER CHANNEL, ADD TO FAVORITES
- Ditto for Internet weather information, but I am never sure I am on a reliable, up-to-date website or stream, especially if ad supported
- Ditto for Radio, using emergency receiver. ACTION: set up and learn to use.
- For ice patches, choose most level route, beware of ice near bushes where sunlight doesn’t reach for days after a storm, walk and observe during afternoon melting rather than before dusk freezing
Summary: I should keep emergency radio out and tuned to a station. ACTION needed for other threats than weather, also.
Event: Trip over something
Indicators: Stumbling, breaking stride, wary passers-by
Mitigation Actions and costs:
- 0, Follow well-defined, familiar route with smooth pavements, rounded curbs – I DO THIS!
- Never take a short cut or unpaved path.
- $100, wear SAS walking shoes with Velcro tabs, NO SHOE LACES to trip over
- 0, detour around walkers with known or suspected pets on leashes, also with running kids or strollers.
- 0, take deliberate steps up and down curbs, use curb cuts where available. Remember that gutters below curbs often slope or are uneven. Don’t be sensitive that people are watching you “fondle the curb”.
- Detour around construction sites, gravel deliveries, … Extra caution on big item trash pickup days when items might protrude from trash at body or head level.
- Detour around bushes growing out over sidewalks, avoiding bush runners, also snakes (yikes)
Summary: I feel safe from tripping now that I have eliminated shoe laces and learned, the hard way, not to take curbs for granted.
Event: Hit by some vehicle
Indicators: Movement, perhaps in peripheral vision; noise
Mitigation Actions and costs:
- 0, stay on sidewalks, if not overgrown by brush
- 1, walk when others are out and about, expecting auto and bicycle drivers to be aware
- Find a safe, regular road crossing, away from an irregular intersection, and jaywalk. Is this wise?
- Do not walk at times of day when sun may blind drivers, e.g. winter days when sunsets are long and low
- Do not trust ears. Bicycles are quiet on smooth pavements, move rapidly down hill. Also hybrid cars may run silently.
- Halt completely when in the vicinity of noisy delivery trucks or car radios. Blending hearing and seeing requires both be at maximum capacity.
- Remember that eerie white cross memorial marking a dangerous intersection where cars come around a blind curve, often running the stop sign. Also shout at speeders and careless drivers.
- REJECTED: Use white cane to warn others I’m limited at seeing them. I don’t think the white cane adds more warning than my active body motion.
Summary: I am currently using 3 safe routes, must not let mind wander at each intersection and crossing. ACTION: sign a petition for noise indicators on silent motors.
Event: Getting lost
Indicators: Unfamiliar houses, pavements, and intersections
Mitigation Actions and costs:
- Follow same routes through established neighborhoods
- $1000, get GPS units and training. Consider when I move and need to define new walking routes.
- Beware of boredom tempting alternate routes.
Summary: I used to get lost, turned around in neighborhoods, no longer take those excursions. 3 regular walking paths will do.
Event: Cardiac attack
Indicators: frequent stops, pain, heavy breathing
Likelihood: Hey, that’s why I do these walks, to build breathing stamina at an altitude of 5000 ft with several serious up and down hill stretches.
Severity: Something’s gonna get me, hope it’s quick.
Mitigation Actions and costs:
- Exercise regularly to maintain condition.
- Checkup when Medicare allows and physicians are available (thanks U.S. health care system)
Summary: Not to worry as long as walks feel good.
Risk Management Summary
I choose this walk as my primary exercise activity, have integrated it into my daily routine, and generally feel better as well as safe. Eliminating shoe laces removed a major stupid cause of minor stumbling and potential falls. I have avoided unsafe and confusing trajectories. My main fears are: Fedex or UPS delivery trucks, fast downhill bikes, pet greetings, loose children, persistent brush-hidden ice patches. My cane would, in this environment, change attention from moving objects toward pavement which is smooth and uncluttered. The cane would do little to warn off threats — they either notice me or not. I choose to balance my partial sight used cautiously with improving listening skills and opt to walk faster and more comfortably without the leading cane and its frequent catches in cracks and grass.
Actions: While walking may not be the main reason, I must gear up with that emergency radio for other threats. More generally, I must learn about emergency information sources that fit my vision capabilities.
References on Risks
- Wikipedia on Risk
- How-to for risk management
- Risks to the public using software, decades of examples of software-related events and management as risks
- ‘Nothing is as Simple’ blog, a phrase to remember and examples
- Previous post on Literacy and reading charts, how I discovered I couldn’t read pie chart data
- Previous Post ‘Grabbing my Identity Cane to Join the Culture of Disability’. I have now progressed through orientation and mobility training to using a longer cane with a rolling tip.
- Emergency preparedness checklists for Vision Losers — TBD
This post speculates about alternative futures for accessibility, such as cost-busting open source developments; self-voicing interactions; overriding inaccessibility via proxy web servers; a screenless, voiced, menu-driven PDA; and higher-level software design practices.
An mp3 Youtube converter converted me!
First, I digress to tell you about a cool utility that sparked the serendipity behind this posting. Blind Cool Tech has a podcast, Jan. 1 2008, on a “YouTube to iPod converter”. I haven’t used Youtube.com much since the videos appear to my partial sight as white blobs with some hand waving going on. Last week, I began to rethink my intellectual aversion to the mindless drivel I feared populated YouTube and affronted my blindness sensibilities. The NYTimes had a piece on “Big Think”, a YouTube for eggheads that promised a variety of magazine-style videos of the ilk that interested me, namely politics and economics, reminiscent of the university-based video series at research universities.
Wow, this little piece of software really delivers and opened up a new way for me to get useful web information. The use case is: copy the URL for a video that interests you, the link you would click to invoke the viewer; paste the link into the accessible converter; choose a file name and location; choose the format type mp3; click “download and convert”; wait a while; listen to the mp3 on your PC or send it on to a digital player, in my case my Bookport from aph.org. With a bit of imagination and patience, you can mentally fill in the video and also have a version to replay or bookmark. Moral of this digression: once again podcasts from the blind community open new worlds for us new vision losers needing accessible software to stay in the mainstream. Thank you, Blind Cool Tech podcaster Brandon Heinrich! Check out my page of YouTube-converted videos on eyesight-related topics.
Youtube video on WebAnywhere Reader
By sheer luck, the first YouTube search I chose was the term “screen reader” and it turned up a provocative demo and discussion:
University of Washington Research: Screen Reader in a Browser, by Professor Richard Ladner and graduate student Jeffrey P. Bigham in the WebInsight project at cs.washington.edu
Briefly, this experimental work addresses the problems of costly screen readers and the need for on-the-fly retrieval of web information by blind users away from their familiar screen readers. The proposed solution is a browser adaptation that adds a script redirecting web pages to a so-called proxy server, which converts the structure of the page, known as its document object model, to text and descriptions that are returned to the browser as speech. This is pretty much what a desktop screen reader does, only now the reading and speech functions are remote. Of course, there are a gazillion problems and limits to this architecture, but it appears to work sufficiently reliably and rapidly to achieve the social goals of its name, “WebAnywhere”. This research project, funded by the National Science Foundation, has also used the above architecture to modify web pages to add ALT tags from link texts, OCR of the image, and social networking tagging of images. Not only is the technology very clever, but the work is based on observations of how blind users use the web and on a growing appreciation of the complexity and often atrocious design of web pages, and of the AJAX technology that frustrates visually impaired web users, no matter the power of their screen readers or magnifiers or their skills.
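To make the idea concrete, here is a deliberately tiny Python sketch of the kind of transformation such a proxy performs: walking a page’s markup and emitting linear, speakable text, announcing links along the way. This is my own illustration, not the actual WebAnywhere implementation, which handles far more (forms, navigation, caching, and real speech output):

```python
from html.parser import HTMLParser

class PageToText(HTMLParser):
    """Rough sketch of a speech proxy's core step: flatten a page's
    document object model into linear, speakable text, announcing
    links the way a screen reader would."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self.in_link = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.parts.append("link: " + text if self.in_link else text)

def page_to_speech_text(html):
    """Return the speakable reading order of a page's visible text."""
    parser = PageToText()
    parser.feed(html)
    return " ".join(parser.parts)
```

The hard parts, of course, are everything this sketch omits: reading order for complex layouts, form controls, and pages whose content is rewritten by scripts after loading.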
As a former employee of funding agency NSF, a reviewer of dozens of proposals, and a Principal Investigator in my sighted days on Computer Security education using animation, let me tell you this U. Washington project is a great investment of taxpayer funds. The work is innovative, well portrayed for outreach at webinsight.cs.washington.edu, addresses monumentally important global and social issues, and helps to bring about a better educated and motivated generation of developers and technology advocates on accessibility issues.
Now, is this proxy-based architecture the killer app for web accessibility? Possibly, with widespread support from IT departments and developers, but the project sets its goals more modestly as “WebAnywhere” for transient web uses and possibly more broadly to address the cost of current screen reader solutions. Maybe the proxy-based approach can be expanded to other uses in demonstrations and experiments on a range of accessibility problems.
Will free screen readers shake up the rehab industrial world? My pick is NVDA
In one sense, a no-cost screen reader provides a way of breaking up the current market hierarchy, which one might unfortunately describe as a cartel of disability vendors and service providers. Yes, the premier screen readers sell for $1000, which seems justifiable by the relatively small market, the few million U.S. and international English-speaking PC users who are blind and on the rehab grid. Some, like the Blind Confidential blogger and industry insider blink, suggest the assistive technology industry is doing fine financially, able to afford more R&D and QA, and attractive to foreign investors. Like any segment of the computer industry, buyers become comfortable with the licensing, personalities, training, upgrade policies, and help lines and therefore resist change. In the case of the $1k products, buyers are often not individuals but rather rehabilitation and disability organizations with a mandate to provide user support through a chain of trained technical, health, and pedagogical professionals. A screen reader like NVDA (NonVisual Desktop Access) from NVACCESS.org will challenge this industry segment as more users find it suitable for their needs, as I have written about in the “Look ma, no screens! NVDA is my reader” posting. With broader acceptance of open source as a reliable and effective mode of software enterprise, as NVDA co-develops with other flexible open source office and browser products, and as energetic developers fan out to other accessibility projects, NVDA might well be the killer app of cost and evolution.
Should apps depend on screen readers or be self-voicing?
However, in a more radical sense, I argue that the screen reader model itself is badly flawed and that technical accessibility alone is inadequate to meet the needs of blind web users.
The value of a universal screen reader is that it can do something useful for most applications by dredging out fundamental information flowing through the operating system about an application’s controls and its users’ actions. But another model of software is so-called “self-voicing”, where the application maintains a focus system that tracks the user’s actions and provides its own reactions through a “speech channel”, providing at least the information an external screen reader would. Such a model can do even better by providing flexible information about the context of a user event and the user’s preferences. A button might respond upon focus with “Delete”, or “Delete the marked podcasts in the table”, or repeat the relevant section of the user manual, or elaborate a description of the use case, such as “first, mark the podcasts to delete, and here’s how to mark; then press this button and confirm the deletions, after which the podcast files will be off your disk unless you download them under another name”. Self-voicing as speech technology is implemented by many applications that allow choice of voice, setting of speed, and even variation of voices matched to uses, e.g. the original message in an e-mail reply. More significantly, self-voicing puts the responsibility for usability of the application directly on the developer to provide consistent, coherent, and useful explanations of each possible user interaction. Further, this information is useful both to the end user and to testing professionals, who can check that the operation does what it says, only what it should, and in the proper context of the application’s use cases. Likewise, a tech writer working with a developer can make an application far more usable and maintainable in the long run. So we claim that a kind of killer app development practice would be the shift of responsibility away from screen readers onto self-voicing applications, including operating systems, where development processes will be improved.
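A toy sketch of the idea (my own illustration, not drawn from any shipping product): the application owns a table of focus announcements at several verbosity levels and pushes the chosen one into whatever speech channel is available, here just a callable standing in for a text-to-speech engine.

```python
# Hypothetical self-voicing control: the application itself, not an
# external screen reader, decides what to say when the Delete button
# gains focus, at a user-chosen verbosity level.
MESSAGES = {
    "brief": "Delete",
    "normal": "Delete the marked podcasts in the table",
    "verbose": ("First, mark the podcasts to delete, then press this "
                "button and confirm; the files will then be off your disk."),
}

def announce(speech_channel, verbosity="normal"):
    """Send the focus announcement to the app's speech channel
    (any callable, e.g. a wrapper around a text-to-speech engine)."""
    speech_channel(MESSAGES.get(verbosity, MESSAGES["normal"]))

# Simulate a focus event, with a list standing in for the speech engine.
spoken = []
announce(spoken.append, "brief")
```

The design point is that the developer, who knows the use case, writes these messages, so they can be reviewed, tested, and kept consistent like any other part of the application.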
We base our claims on personal experience developing a self-voicing podcatcher, @Podder, for partially sighted users using a speech channel of copying text to the clipboard to be read by external text-to-speech applications. Another self-voicing application is Kurzweil 1000 for scanning and document management, and employing the nicest spell checker around.
Can overcoming missing and muddled use cases conquer inaccessibility?
We have argued in the “Are missing, muddled use cases the cause of web inaccessibility?” posting that the main culprit in web usability is not technical accessibility but the way use cases are represented, tangled, and obscured by links as well as graphics and widgets on web pages. A use case describes a sequence of actions performed to meet a specific goal, such as “register on a web site” or “archive e-mail messages”. Use cases not only lay out actions but also provide the rationale, consequences, constraints, and error recovery procedures for interactions. Our claim is that software developers, both desktop and web application developers, force all users, sighted or blind, to infer the use cases from page contents and layouts, often embellished with links, such as blog rolls, to enhance social interaction and increase search engine rankings. Reports such as those from the WebInsight project and the Nielsen Norman report “Beyond ALT Text” describe in gory detail the frustrations and failures of visually impaired users struggling with their screen readers, magnifiers, and braille displays to overcome poor use case representation as they try to keep up with sighted users in gaining information from, and shopping within, the constellation of current web sites. While I certainly believe that web accessibility activists are important to removing barriers and biases, the larger improvement will come when web sites are designed and clearly presented to achieve their use cases, for the benefit of all who gain from better web site usage. This is already occurring with re-engineering for mobile devices, where failure to activate a use case, or to have the appropriate use case available, is especially apparent, and, seemingly, not really that hard to fix.
How will mobile devices improve accessibility?
Finally, what about the marvelous mobile devices such as the fully voiced, menu-driven Levelstar Icon and APH Braille Plus Mobile Manager? After 8 months of Icon addiction, I firmly believe that, cost aside, this form of computer is far superior to conventional Internet usage for the activities it supports, mainly e-mail, RSS management, browsing, and access to Bookshare.org resources. For example, I can consume the news I want in about an hour from the NY Times, Washington Post, Wall Street Journal, Arizona Republic, CNN, Inside Higher Ed, CNET, and a host of blogs. And that’s BEFORE getting up in the morning. No more waiting for web pages to load on a news web site, browsing through categories of information that don’t interest me, or bypassing advertisements. Additionally, I am surprised at how often I use the Icon’s “Mighty Mo” embedded browser over wireless rather than open up the laptop to bring up Firefox and fend off all my update-anxious packages and firewall warnings. Yes, life with the Icon is “living big”. The Icon is mainly part of the trend toward phones and wireless devices, but just happens to be developed by people who know what visually impaired users need and want.
Maybe, somewhere out there, is a wondrous software package that will dramatically boost the productivity and comfort of visually impaired computer users. With some assurance, we can recognize an upcoming generation of open source oriented developers seasoned by traditional assistive technology and adept at both project organization and current software tools. Funders and support organizations can look ahead to utilization of their innovations and improvements. But maybe the core problem is much harder: as we claim, a disconnect in “computational thinking” between software designers who have found their way through models and user-oriented analysis and web designers stuck at the token, speechless GUI level of browsers and web pages. Empirical researchers on accessibility are starting to witness and understand the fragility of users caught between artifacts designed for sighted users and clumsy, superhuman-emulating tools such as screen readers and magnifiers, while the proper responsibility for accessibility falls on developers who have yet to appreciate the power of readily available speech channels alongside graphical user interfaces.
What do others think? Is there a “killer app” for accessibility? Comment on the “As Your World Changes” blog at https://asyourworldchanges.wordpress.com or e-mail firstname.lastname@example.org.