Category Archives: Interface

iPhone 4S: speech advances but there’s more to do

By Kimberly Patch

Apple’s iPhone 4S has taken a couple of nice big steps toward adding practical speech to smart phones. There are still some big gaps, mind you. I’ll get to those as well.

Speech on the keyboard

The long-awaited speech button is now part of the keyboard. Everywhere there’s a keyboard, you can dictate rather than type. This is far better than having to use an app to dictate, then cut and paste into applications. This is one of the big steps: it will make life much easier for people who have trouble using the keyboard. And I suspect a large contingent of others will find themselves dictating into the iPhone a good deal of the time, increasingly reserving the keyboard for situations where they don’t want to be overheard.

The key question about speech on the keyboard is how it works beyond the letter keys and straight dictation.
For instance, after you type
“Great! I’ll meet you at the usual place (pool cue at the ready) at 6:30.”
how easy is it to change what you said to something like this?
“Excellent :-) I’ll meet you at the usual place (pool cue at the ready) at 7:00.”
And then how easy is it to go back to the original if you change your mind again?

Speech assistant

After we all use the speech assistant for a couple of days or weeks, it’ll become readily apparent where Siri lies on the very-useful-to-very-annoying continuum.

The key parameters are:
– how much time Siri saves you
– how a particular type of Siri audio feedback hits you the 10th time you’ve heard it
– how physically and cognitively easy it is to switch between the assistant and whatever you have to do with your hands on the phone.

One thing that has the potential to tame the annoyance factor is giving users some control over the feedback.

I think the tricky thing about computer-human feedback is it’s inherently different from human-human feedback. One difference is the computer has no feelings and we know that. Good computer-human feedback isn’t necessarily the same as good human-human feedback.

The big gap

There’s still a big speech gap on the iPhone. Speech is still just a partial interface.

Picture sitting in an office with a desktop computer and a human assistant. Type anything you want using the letter keys on your keyboard or ask the assistant to do things for you. You could get a fair amount of work done this way, but there’d still be situations where you’d want to control your computer directly using keyboard shortcuts, arrow keys or the mouse. Partial interfaces have a high annoyance factor.

Even if you use a mix of speech, keyboard and gesture, true efficiencies will emerge only if you’re able to choose the method of input based on what you want to do rather than on what happens to be available.

Ultimately, I want to be able to completely control my phone by speech. And I suspect if we figure out how to do that, then make it available for everyone, the general mix of input will become more efficient.

I’d like to see the computer industry tap folks who have to use speech recognition as testers. I think this would push speech input into practical use more quickly and cut out some of the annoyance-factor growing pains.

What do you think? Let me know at Kim@ this domain name.

Discover, Adjust, Organize and Share

By Kimberly Patch

Keyboard shortcuts have a lot of potential. They’re fast.

For example, cutting and pasting by

– Hitting “Control x”
– Moving the cursor to the paste location
– Then hitting “Control v”

is speedier than

– Moving the mouse to the “Edit” menu
– Clicking “Edit”
– Clicking “Cut”
– Moving the cursor to the paste location
– Moving back up to click “Edit”
– Then clicking “Paste”.

Add this up over many tasks and you have a big difference in productivity.

So why don’t we see more people using keyboard shortcuts?

Ask someone who uses the mouse for just about everything and you’re likely to get a compelling answer — it’s easier. And it is — it’s cognitively easier to choose a menu item than to remember a shortcut.

Given a choice, people generally do what’s easier. On a couple different occasions I’ve heard people say that, all else being equal, they’d hire a blind programmer over a sighted one because the blind programmer is faster. The blind programmer must use keyboard shortcuts.

This is a common theme — we have something potentially better, but human behavior stands in the way of adoption.

In the case of keyboard shortcuts there’s a little more to the story, however.

As a software community we haven’t implemented keyboard shortcuts well.

Many folks know keyboard shortcuts for a few very common actions like cut, paste and bold, but it’s more difficult to come up with keyboard shortcuts for actions like adding a link or a hanging indent because they are used less often and are less likely to be the same across programs.

So users are often stuck with different shortcuts for the same tasks in different programs, requiring them to memorize and keep track of multiple sets of controls. This is cognitively difficult for everyone, and more so for some disabled populations and the elderly.

This type of implementation is akin to asking someone to speak different languages depending on who they are speaking to. Depending on how motivated and talented they are, some folks may be able to do it, but not many. And if there’s an easier way, even those capable of doing it either way will often choose easier even if it’s less efficient.

So we aren’t letting keyboard shortcuts live up to their potential.

There’s a second keyboard shortcuts issue that’s getting worse as Web apps become more prevalent: clashing shortcuts. If you hit “Control f” in a Google document, do you get the Google Find facility or the browser Find facility? Go ahead and try it out. It’s messy.
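To see why this happens, here’s a minimal sketch, in TypeScript, of how a web app typically captures “Control f” before the browser acts on it. Which Find facility you get depends on whether code like this runs and calls preventDefault. The app-side Find function here is hypothetical.

// Minimal sketch of a web app intercepting Ctrl+F (illustrative only).
document.addEventListener("keydown", (event: KeyboardEvent) => {
  if ((event.ctrlKey || event.metaKey) && event.key === "f") {
    event.preventDefault();   // suppress the browser's Find facility
    openAppFind();            // hypothetical: the app's own Find bar
  }
});

function openAppFind(): void {
  // Stand-in for the application's search widget.
  console.log("App-level Find opened");
}

If the app forgets to call preventDefault, or the browser grabs the key first, you get the other program’s Find — hence the mess.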

This is already an issue in the assistive technology community, where people who require alternate input or output must use software that runs all the time in conjunction with everything else. For example, a speech engine must be on all the time listening for commands, and screen magnifier software must be running all the time to enlarge whatever you’re working in.

So there are two problems: keyboard shortcuts aren’t living up to their potential to increase efficiency, and, especially on the Web, keyboard shortcuts are increasingly likely to clash.

I think there’s a good answer to both problems: a cross-program facility to easily discover, adjust, organize and share shortcuts.

– We need to easily discover shortcuts so we can see them all at once, spot patterns across programs, and spot conflicts among programs and apps that may be open at the same time.

– We need to easily adjust shortcuts so we can choose common shortcuts and avoid clashes.

– We need to easily organize commands, arranging them and adding headings, so we can find commands quickly, remember what we did, and over time build a good mental map of commands. Lack of ability to organize is the Achilles’ heel of many macro facilities. It’s like asking people to play cards without being able to rearrange the cards in their hand. It’s possible, but it makes things unnecessarily difficult for no good reason.

– We need to share the adjustments because it makes us much more efficient as a community. My friend Dan, for instance, is very logical. He uses many of the same programs I do, and we both use speech input. So if there were a facility to discover, adjust, organize and share keyboard shortcuts, I’d look to see if Dan had posted his changes, and I would adjust them to my needs from there.

The organizing and sharing parts are the most important, because they allow for crowdsourcing.
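As a sketch of what such a facility might store under the hood — entirely hypothetical, not an existing tool — each entry could record the program, the action, the keys and an optional user heading, so conflicts can be flagged and a whole adjusted set can be exported for sharing:

// Hypothetical data model for a discover/adjust/organize/share facility.
interface ShortcutEntry {
  program: string;   // e.g. "Firefox", "Google Docs"
  action: string;    // e.g. "Find", "Add link"
  keys: string;      // normalized, e.g. "Ctrl+F"
  heading?: string;  // user-supplied grouping, for the "organize" step
}

// Discover clashes: the same key combination claimed by more than one
// program that may be open at the same time.
function findClashes(entries: ShortcutEntry[]): Map<string, ShortcutEntry[]> {
  const byKeys = new Map<string, ShortcutEntry[]>();
  for (const e of entries) {
    const list = byKeys.get(e.keys) ?? [];
    list.push(e);
    byKeys.set(e.keys, list);
  }
  return new Map(
    Array.from(byKeys).filter(
      ([, list]) => new Set(list.map(e => e.program)).size > 1)
  );
}

// Share: an adjusted set is just data, so it can be exported and re-imported.
function exportSet(entries: ShortcutEntry[]): string {
  return JSON.stringify(entries, null, 2);
}

const clashes = findClashes([
  { program: "Firefox", action: "Find in page", keys: "Ctrl+F" },
  { program: "Google Docs", action: "Find in document", keys: "Ctrl+F" },
]);
console.log(clashes.has("Ctrl+F")); // true: two programs claim Ctrl+F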

Over the past few decades the computer interface ecosystem has shifted from single, unrelated programs to separate programs that share information, to programs so integrated that users may not know when they are going from one to another. This has increased ease-of-use and efficiency but at the same time complicated program control.

At the same time programs have grown more sophisticated. There’s a lot of wasted potential in untapped features.

If we give users the tools to discover, adjust, organize and share, I bet we’ll see an increase in speed and efficiency and an uptick in people discovering nifty new program features.

Tip: Not my mistake

One thing the Dragon NaturallySpeaking speech engine could do better is hyphenation. I don’t mind so much when I say something that should be hyphenated and it’s not; if the engine leaves out the hyphenation I can always say the NaturallySpeaking command “hyphenate that” or the UC command “1-10 Hyphenate” after the fact. I can also specify hyphenation when I want it, e.g. “on hyphen the hyphen fly” will type “on-the-fly”.

If I have something that’s not hyphenated and should be, it’s either a mistake or something I accidentally left out.

But if NaturallySpeaking puts in hyphenation where I don’t want it, there are two problems. First, there’s no easy way to remove hyphenation after the fact — I have to select the phrase, then say it again in two phrases so it won’t be hyphenated, which is three steps. Second, there’s no way to specify no hyphenation.

If NaturallySpeaking over-hyphenates and I don’t notice, it looks like I’m consciously adding hyphens where they shouldn’t be. There’s nothing more annoying than having another entity introduce mistakes into your work.

Because the minuses of over-hyphenation are larger than the minuses of under-hyphenation, when I see a phrase hyphenated that’s not supposed to be, I remove the hyphenated version from the NatSpeak Vocabulary so it won’t happen again.

For instance, I removed “follow-up”, which I often put as a stand-alone tag in my to-do list. It’s a clunky workaround, but it’ll have to do until speech engines get better at analyzing hyphenation.

To remove a vocabulary word, say “NatSpeak Vocabulary”, say the word or phrase you want to delete, say “Under d c” to delete it and close the window, then say “Enter” to confirm the change.

I think Nuance could mitigate this problem with a pair of in-line commands: “no-hyphen that” would remove hyphenation in the last phrase and “no-hyphen” would specify that something not be hyphenated, parallel to the “no-caps” command. I’m adding this to the Nuance wish list.

Tip: Rudolf Noe’s Customize Your Web

Rudolf Noe, creator of the Mouseless Browsing add-on, is beta testing a new add-on that gives nonprogrammers extensive control of the Web.

Noe’s Customize Your Web Firefox add-on allows you to specify that certain things happen every time a given webpage comes up. You can control where the focus is, click a button automatically, change how webpage elements look, and even change how they’re arranged on the page. Customize Your Web also contains a macro facility that allows you to attach keystrokes to elements on a given webpage. The key thing about the extension is it provides extensive control without having to program.

Two of the simplest abilities, controlling where the focus is and clicking buttons, are fairly easy to implement. The focus ability lets you, for instance, open the Google Documents site with the focus in the search bar. The click ability allows you to automatically log in to any site.

To set up a focus change or button click on a webpage, go to that webpage, click the tiny Customize Your Web button in the bottom right corner of the screen just above the toolbar, click the element you want to affect, choose an action, then save what you’ve done.

You can name a Mouse Touch to click the Customize Your Web button (see UC Lesson 10.24).

With just a little more effort you can specify keystrokes to do things like go down one search result, or click “Previous” or “Next” at the bottom of a search page.

If you assign the up and down arrows to go up and down by search result in a Google search, and Enter to click a selected result, you can then use the Utter Command speech command “3 Down · Enter”, for instance, to open the third search result down.
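Behind the scenes, a macro like that boils down to a small keystroke handler. Here’s an illustrative TypeScript sketch of the idea — not Customize Your Web’s actual code, and the selectors are made up:

// Illustrative sketch: arrow keys move a highlight through search
// results; Enter opens the selected one. Selectors are hypothetical.
const results = Array.from(
  document.querySelectorAll<HTMLAnchorElement>("#search a.result")
);
let selected = 0;

function highlight(index: number): void {
  results.forEach((r, i) => r.classList.toggle("selected", i === index));
}

document.addEventListener("keydown", (event: KeyboardEvent) => {
  if (event.key === "ArrowDown") {
    selected = Math.min(selected + 1, results.length - 1);
    highlight(selected);
    event.preventDefault();   // don't also scroll the page
  } else if (event.key === "ArrowUp") {
    selected = Math.max(selected - 1, 0);
    highlight(selected);
    event.preventDefault();
  } else if (event.key === "Enter") {
    results[selected]?.click();   // open the selected result
  }
});

A spoken command like “3 Down · Enter” then just feeds three Down-arrow keystrokes and an Enter into the same handler.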

Noe’s video shows you how to use the extension in detail.

Also see UC Exchange page UCandFirefox.

Have you found Firefox or Thunderbird add-ons that make things easier when you’re using speech? Tell me about them – reply here or let me know at info@ this website address.

Dealing with the Office 2007 ribbon


I’ve been getting a lot of questions lately about Microsoft Office 2007 versus Microsoft Office 2003.

My stock answer is that I prefer the 2003 drop-down menus to the 2007 ribbon. It’s funny: at the same time Office switched from drop-down menus to the more Web-like ribbon, the Web application Google Documents made the opposite move, changing from a tab-based interface to drop-down menus. Out of the box, 2007 is less efficient — it takes up more screen space and requires more steps than 2003.

Having said that, the 2007 interface is also very configurable. You can put any drop-down menu or menu item on the Quick Access Toolbar that runs across the very top of the screen. And you can hide the ribbon. If you take the time to put the items you use most on the Quick Access Toolbar, you can make Office 2007 much more accessible.

For details on setting things up and using Microsoft Office 2007 with Utter Command, see UCExchange: UCandOffice2007.

What’s your opinion on 2007 versus 2003? Reply here or let me know at info@ this website address.

Research Watch: What you see changes what you hear


Who says looks don’t matter?

It looks like what you see changes what you hear. Researchers from Haskins Laboratories and MIT have found that different facial expressions alter the sounds we hear.

This shows that the somatosensory system — the mix of senses and brain filtering that determines how you perceive your body — is involved when you process speech.

This doesn’t have a whole lot to do with speech commands except to show that it’s easy to underestimate the complexity, and subtlety, of our perception of spoken language.

Resources:

Somatosensory function in speech perception
www.pnas.org/cgi/doi/10.1073/pnas.0810063106

Keyboard shortcuts: naming, sharing and seeing


The way we control computer programs is fairly inefficient.

Keyboard shortcuts are underused in favor of using a mouse to click through menus. This is practical in the short term — it takes less thought to browse through menus than to remember a keyboard shortcut. But it’s not very productive.

Look at the whole picture and you find good reasons to make the less productive choice. There are barriers to using keyboard shortcuts. Help and learning tools for keyboard shortcuts are scant at best. And inconsistencies across programs make the learning task larger.

So how do we improve things?

We can (continue to) encourage software makers to improve keyboard shortcut documentation and consistency. This is important, but it’s not going to change the world.

I think things would improve greatly given universal abilities to

1. name our own keyboard shortcuts — this currently exists in some but not all programs
2. share sets of shortcuts
3. see all shortcuts for a given application, and even compare shortcuts across applications

This would provide a good mental map of functionality — both of individual programs and across the landscape of suites.

It would make efficient functionality accessible across the board. It would enable individuals, organizations, departments or corporations to make applications more efficient and even standardize shortcuts across applications. The ability to share shortcuts would put a lot of brains on the problem and make the process efficient and evolutionary.
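As a sketch of what standardizing across applications could look like, here’s a hypothetical TypeScript fragment that diffs two programs’ shortcut tables and lists the actions bound inconsistently. The example bindings are made up:

// Sketch: compare two applications' shortcut tables and list the actions
// bound to different keys -- candidates for standardization.
type ShortcutTable = Record<string, string>; // action -> key combination

function inconsistencies(a: ShortcutTable, b: ShortcutTable): string[] {
  return Object.keys(a).filter(
    action => action in b && a[action] !== b[action]
  );
}

// Hypothetical example data:
const wordShortcuts: ShortcutTable = { Find: "Ctrl+F", Redo: "Ctrl+Y" };
const docsShortcuts: ShortcutTable = { Find: "Ctrl+F", Redo: "Ctrl+Shift+Z" };

console.log(inconsistencies(wordShortcuts, docsShortcuts)); // ["Redo"]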

Given a map of all shortcuts, you could make things even better by allowing the user to mark the map — maybe using color labels.

Tools like this are the equivalent of a downhill groove for water: they make it easy to be more efficient.

Keyboard shortcuts are least standard and most lacking in Internet applications. I’m thinking an ability like this could be built into a browser, or added on to one.

And in addition to increasing productivity across the board, keyboard shortcuts are central to accessibility. The blind community relies on keyboard shortcuts. And speech commands often tap keyboard shortcuts — they’re often the hooks people use to write custom macros, and Utter Command allows you to speak and combine keys, including keyboard shortcuts.

So who’s going to step up to the plate?

Gravity on the Web


Computer commands of all kinds — speech, keyboard and mouse — are much easier to use when they’re consistent across programs.

At the base level, it’s important that common elements like drop-down menus act the same. You control drop-down menus without thinking — click on an element or use the Left, Right, Up, Down and Enter keys.

Consistent commands are the real-world equivalent of having the same gravity in every room, or keys turning the same way to unlock.

Web applications are looking more and more like standard computer programs, but sometimes the elements that look familiar don’t act the way we’re used to. Drop-down menus usually respond in a familiar way to the mouse, but often don’t respond to the Up, Down and Enter keys.
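The fix on the application side is small. Here’s a minimal sketch, in TypeScript, of wiring Up, Down and Enter to a custom web menu; the element IDs are hypothetical:

// Minimal sketch: make a custom web drop-down respond to Up/Down/Enter
// the way a native menu does. The "#menu li" structure is made up.
const menu = document.getElementById("menu");
const items = Array.from(document.querySelectorAll<HTMLElement>("#menu li"));
let active = 0;

menu?.addEventListener("keydown", (event: KeyboardEvent) => {
  if (event.key === "ArrowDown") active = (active + 1) % items.length;
  else if (event.key === "ArrowUp") active = (active - 1 + items.length) % items.length;
  else if (event.key === "Enter") { items[active].click(); return; }
  else return;                    // leave other keys alone
  items.forEach((item, i) => item.classList.toggle("active", i === active));
  event.preventDefault();         // don't also scroll the page
});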

But perhaps things are getting better.

The first drop-down menus to show up on Google Docs didn’t respond to Left, Right, Up, Down and Enter. Then most of the folder-view drop-down menus were arrow-key/Enter enabled, but not the document menus. A few months ago the document menus changed from looking tab-like to looking more menu-like, but still didn’t respond to the arrow keys and Enter. Then, sometime in the last few weeks, the document menus became arrow-key/Enter enabled (the change didn’t show up in the update notice).

The keyboard shortcuts enable better speech navigation as well. I can say, for instance, “3 Down Enter” to choose an item in an open menu, “3 Down 2 Right Enter” to choose a color on the open color menu, or “7 Right Wait 3” to take a three-second peek at each of the seven successive menus starting with the File menu open.
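A command like “3 Down 2 Right Enter” is just a counted expansion into ordinary keystrokes. Here’s an illustrative TypeScript sketch of that expansion — not Utter Command’s actual code:

// Sketch: expand a counted key command like "3 Down 2 Right Enter"
// into a flat keystroke sequence (illustrative only).
function expand(command: string): string[] {
  const keys: string[] = [];
  const tokens = command.split(/\s+/);
  for (let i = 0; i < tokens.length; i++) {
    const count = parseInt(tokens[i], 10);
    if (!isNaN(count)) {
      const key = tokens[++i];                // the key name follows the count
      for (let k = 0; k < count; k++) keys.push(key);
    } else {
      keys.push(tokens[i]);                   // an uncounted key, e.g. "Enter"
    }
  }
  return keys;
}

console.log(expand("3 Down 2 Right Enter"));
// ["Down", "Down", "Down", "Right", "Right", "Enter"]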

This keyboard enablement is a great trend.

Now all we need is keyboard shortcuts to open the menus in the first place. We also need the same kind of control in all Web applications, including Google spreadsheets.

Friday Tip: Remembering boilerplate and vocabulary commands


NatSpeak boilerplate Text and Graphics commands allow you to insert any text or graphics into a document using a single speech command. These commands can be very powerful — they’re good for adding text and graphics that you use often, such as your address or a set of directions.

The NatSpeak Vocabulary editor allows you to add words or phrases to your vocabulary that have different spoken and written forms. This allows you to make words like your email address easily pronounceable.

The key to using boilerplate and vocabulary commands is being able to remember them.

There are two ways to make these types of commands easy to remember:

1. Word them consistently

2. Make them easy to look up

I find the easiest way to remember boilerplate Text and Graphics commands is to simply say the first part of the text you’re inserting followed by “Full”. So “Redstart Full” prints the full name and address of Redstart Systems. If you have two different versions of the address, add a number. “Redstart Full 1” prints the same address in a different format.

You can use the Utter Command Clipboard facility to make anything easy to look up. Once you’ve named your Text and Graphics command, say “Line Copy To” followed by the name of the UC Clipboard file and you’ve got it recorded. For example, to keep your boilerplate commands in “UC List 1”, say “Line Copy To List 1”.

Now any time you want to consult your list of commands say “List 1 File”. You can also print it out.

I also use the start-to-say method for vocabulary words that have different written and spoken forms. I’ve put my Redstart email address in as a vocabulary word with the spoken form “Kim at Red” and my Gmail address in as a vocabulary word with the spoken form “Kim at G Mail” (in address commands I use “Kim” whether the actual address is just Kim or something longer).

One caution about using vocabulary this way: make sure commands are at least two words, and make sure those two words aren’t a common phrase you’d want to say as is. If you need to, use the “Full” method above to avoid this problem. Also make sure to save your user after adding vocabulary words.

If you wish, keep vocabulary words that have different written and spoken forms on the same list as your boilerplate commands.

The difference between boilerplate commands and written/spoken vocabulary words is that a block of boilerplate is returned exactly as written, while vocabulary words are treated like ordinary words, with appropriate spacing before and after them.
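In code terms the distinction is simple. Here’s a toy TypeScript sketch, with made-up addresses — not how NaturallySpeaking is actually implemented:

// Toy sketch of the boilerplate/vocabulary distinction (illustrative
// only). The address and email below are placeholders, not real.
const boilerplate = new Map<string, string>([
  ["Redstart Full", "Redstart Systems\n10 Example Street\nBoston, MA"],
]);
const vocabulary = new Map<string, string>([
  ["Kim at Red", "kim@example.com"],
]);

function insertSpoken(text: string, spoken: string): string {
  const block = boilerplate.get(spoken);
  if (block !== undefined) return text + block;    // returned exactly as written
  const word = vocabulary.get(spoken) ?? spoken;   // written form, or plain dictation
  return text + " " + word;                        // spaced like an ordinary word
}

console.log(insertSpoken("Contact:", "Kim at Red")); // "Contact: kim@example.com"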

UC Commands Tip: say “NatSpeak” followed by the first one or two words in a NatSpeak dialog box title to call up that dialog box.

Commands for the dialog boxes mentioned above:

“NatSpeak My Commands” calls up the NatSpeak My Commands dialog box where you can write a boilerplate Text and Graphics macro

“NatSpeak Vocabulary” calls up the NatSpeak Vocabulary Editor dialog box