Category Archives: Patch on Speech

Thunderbird tabs and consistency

Thunderbird now has tabs for open messages, which is very convenient. You can have three messages open and see where they are from the tabs — this is similar to tabbed browsing in programs like Firefox and Internet Explorer. And you can move among tabs using the same commands you use to move among tabs in your browser: “Tab Back”, “Tab Forward” and “1-20 Tab Back/Forward”.

Unfortunately, the keyboard shortcut to close a message tab differs from the standard close-document/tab command used in most programs, including Firefox, even though Thunderbird is developed by the same organization as Firefox. The usual command, “Control Function 4”, logically mirrors the common “Alternate Function 4” used to close a window.

If the standard keyboard shortcut were enabled, as it is in programs such as Microsoft Word and Firefox, you could say the shortcut or “Document Close” to close a document or tab. And if you wanted to close more than one, you could say “Document Close Times 3”, for instance.

If you dig through the keyboard shortcuts for Thunderbird, you’ll find that there is a nonstandard keyboard shortcut to close a message tab: “Control w”. So you can train yourself to say “Control w” to close a message when you’re in Thunderbird. Keep in mind you can also say “Control w Times 3” to close three open messages. But it would be far better not to have to think about which program you are in when closing a tab or document. Feel free to complain to Thunderbird about this oversight at the Thunderbird support forum.

Here’s another Thunderbird tip: If you want to move a message rather than just closing it, try “Move Recent”, “1-10 Down Enter”.
There’s more Thunderbird strategy on the Redstart Wiki: http://redstartsystems.com/Wikka/wikka.php?wakka=UCandThunderbird

What’s in a name? Lots.

I get a lot of inquiries from people who are confused about the Dragon speech engine’s many names, and also the name of the company that owns it.

Here’s a brief history:

The Dragon speech engine has changed hands twice, but the name of the company owning it has changed three times.

In the beginning Dragon Systems created the DragonDictate speech engine. Several other companies also created programs that let you speak to a computer in those early days: Kurzweil Applied Intelligence, Lernout & Hauspie, IBM and Philips. These early speech engines all required you to pause between words. This was a somewhat frustrating way to dictate and was hard on your voice.

Dragon, Lernout & Hauspie, IBM and Philips eventually improved their speech engines so you could dictate in phrases. When Dragon Systems brought out continuous speech recognition, it changed the name of its product to Dragon NaturallySpeaking. Dragon NaturallySpeaking generally worked better for dictation than DragonDictate.

People who were trying to use Dragon NaturallySpeaking hands-free, however, found that it lacked some of the DragonDictate features. Some of us who needed hands-free speech input used a combination of DragonDictate and Dragon NaturallySpeaking for years (for me, until NaturallySpeaking 3.5 came out). There are still a couple of features from the old DragonDictate that haven’t made it into Dragon NaturallySpeaking. The one I miss the most is the ability to go straight to a macro script from the recognition dialog box, where you could see what Dragon had heard. So DragonDictate was used and talked about long after development stopped.

Just before Dragon NaturallySpeaking version 5 came out, Dragon Systems was sold to Lernout & Hauspie, maker of the rival speech engine VoiceXpress Pro. NaturallySpeaking 6 was a merger of the products, keeping the NaturallySpeaking name and most of the look and feel (with the notable exception of the macro creation facility). When Lernout & Hauspie famously melted down, the Lernout & Hauspie speech assets were sold to ScanSoft, a company that started with optical character recognition technology acquired from Xerox, which had acquired it by buying Kurzweil Computer Products, Inc., one of several companies started by Ray Kurzweil. (The Lernout & Hauspie speech assets also included the Kurzweil Voice speech engine, which Lernout & Hauspie had acquired by buying Kurzweil Applied Intelligence, another company started by Ray Kurzweil.)

Just before ScanSoft acquired Dragon, it had signed a 10-year deal with IBM to market IBM’s ViaVoice, which by then included PC and Mac versions. After the ScanSoft acquisition there were no more new ViaVoice products. Over the next few years ScanSoft acquired many more speech-related companies, including Nuance. After the Nuance acquisition, ScanSoft switched its name to Nuance. Some people refer to the old Nuance as blue Nuance and the current Nuance as green Nuance. (This was the second name change for ScanSoft. It was founded in 1992 as Visioneer.)

This year, Nuance created an iPhone app named Dragon Dictation — name sound familiar?

Also this year Nuance bought MacSpeech. There’s some name history here too. MacSpeech’s original speech engine for the Mac, iListen, was based on the Philips FreeSpeech2000 speech engine. MacSpeech changed its product name to match the company name after signing an initial deal with Nuance in early 2008 to use the Dragon NaturallySpeaking engine. (Later in 2008 Nuance bought Philips Speech Recognition Systems.) After buying MacSpeech, Nuance renamed the speech engine product Dragon Dictate for Mac. Name sound familiar? The old DragonDictate had no space between the words. The new Dragon Dictate is two separate words.

OK. Got that all straight? There’s a little more nitty-gritty. The Dragon NaturallySpeaking product line includes a basic version, middle version, professional version, legal version and medical version. The professional, legal and medical versions all originally had the “Dragon NaturallySpeaking” first and middle names, but somewhere along the line the legal and medical versions lost NaturallySpeaking, becoming Dragon Legal and Dragon Medical.

Meanwhile, the basic and middle versions have recently changed names. The basic version has in the past gone by “standard” but is currently “home”. The middle version has in the past gone by “preferred” but is currently “premium”. There’s also a sub-basic version named Dragon NaturallySpeaking Essentials that isn’t usually sold by resellers but can be found in retail stores, usually around Christmastime.

One last thing. I’m not sure where Dragon Speak came from. I’ve heard many people refer to Dragon NaturallySpeaking as Dragon Speak, but that’s never been an official name — so far.

So — I hope that clears everything up.

Utter Command has always been named Utter Command — just saying.

Tip: Easier scrolling with mouse-speech combination

If you use a mouse to scroll, have you noticed how much fine motor control it takes to keep the arrow on the scroll bar as you move the page? You’re doing a fair bit of work. It’s akin to staying on a balance beam.

If you can move your mouse, you can use an Utter Command touch/speech combination that makes scrolling easier and shows you just how hard you have to work when you use the mouse alone to control the scroll bar.

Next time you use the mouse to scroll, place the mouse arrow on the scroll bar, then say “Touch Hold”. This command holds down the left mouse button. Now you can scroll simply by moving the mouse up and down. There’s no need to click, and there’s no need to stay within the narrow confines of the scroll bar. This is especially effective when you’re reading and can leave the left mouse button down between moves. It’s also handy when you’re skimming quickly through a document: you can concentrate on what you’re reading because there’s no need to take your eyes off it to make sure the mouse is on the scroll bar. When you’re done using this command, make sure to release the mouse button: “Touch Release”.
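If you’re curious about the mechanism, the command is essentially a press-and-hold of the left mouse button. Here’s a rough sketch of that idea in Python using the third-party pyautogui library; it’s only an illustration of press, hold, move and release, not how Utter Command implements the command.

    # Illustration only: press and hold the left button, move to scroll,
    # then release -- the same idea as "Touch Hold" / "Touch Release".
    import time
    import pyautogui

    pyautogui.mouseDown(button="left")      # like saying "Touch Hold"
    pyautogui.moveRel(0, 120, duration=1)   # drag the page down
    pyautogui.moveRel(0, -60, duration=1)   # and back up, no clicking needed
    time.sleep(0.5)
    pyautogui.mouseUp(button="left")        # like saying "Touch Release"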

You can use the same method in a drawing program to draw without having to have the pen touch the tablet.

There are more details on the “Touch Hold/Release” command in UC Lesson 4.5.

Keep in mind that the Touch Hold/Release method is one of several ways to control the scroll bar using Utter Command — if the combination is comfortable for you, it’s a good one. If you need to be completely hands-free, see UC Lesson 1.8, which details all the ways you can use speech to navigate documents, and UC Lesson 9.5, which details Web navigation.

Happy navigating.

Discover, Adjust, Organize and Share

By Kimberly Patch

Keyboard shortcuts have a lot of potential. They’re fast.

For example, cutting and pasting by

– Hitting “Control x”
– Moving the cursor to the paste location
– Then hitting “Control v”

is speedier than

– Moving the mouse to the “Edit” menu
– Clicking “Edit”
– Clicking “Cut”
– Moving the cursor to the paste location
– Moving back up to click “Edit”
– Then clicking “Paste”.

Add this up over many tasks and you have a big difference in productivity.

So why don’t we see more people using keyboard shortcuts?

Ask someone who uses the mouse for just about everything and you’re likely to get a compelling answer — it’s easier. And it is — it’s cognitively easier to choose a menu item than to remember a shortcut.

Given a choice, people generally do what’s easier. On a couple of different occasions I’ve heard people say that, all else being equal, they’d hire a blind programmer over a sighted one because the blind programmer is faster. The blind programmer must use keyboard shortcuts.

This is a common theme — we have something potentially better, but human behavior stands in the way of adoption.

In the case of keyboard shortcuts there’s a little more to the story, however.

As a software community we haven’t implemented keyboard shortcuts well.

Many folks know keyboard shortcuts for a few very common actions like cut, paste and bold, but it’s harder to remember shortcuts for actions like adding a link or a hanging indent because they are used less often and are less likely to be the same across programs.

So the user is often stuck with different shortcuts for the same tasks in different programs, requiring them to memorize and keep track of multiple sets of controls. This is cognitively difficult for everyone, and more so for some disabled populations and the elderly.

This type of implementation is akin to asking someone to speak different languages depending on who they are speaking to. Depending on how motivated and talented they are, some folks may be able to do it, but not many. And if there’s an easier way, even those capable of doing it either way will often choose the easier way, even if it’s less efficient.

So we aren’t letting keyboard shortcuts live up to their potential.

There’s a second keyboard shortcut issue that’s getting worse as Web apps become more prevalent: clashing shortcuts. If you hit “Control f” in a Google document, do you get the Google Find facility or the browser Find facility? Go ahead and try it out. It’s messy.

This is already an issue in the assistive technology community, where people who require alternate input or output must use software that runs all the time in conjunction with everything else. For example, a speech engine must be on all the time listening for commands, and screen magnifier software must be running all the time to enlarge whatever you’re working in.

So there are two problems: keyboard shortcuts aren’t living up to their potential to increase efficiency, and, especially on the Web, keyboard shortcuts are increasingly likely to clash.

I think there’s a good answer to both problems: a cross-program facility to easily discover, adjust, organize and share shortcuts.

– We need to easily discover shortcuts so we can see them all at once, see patterns across programs, and spot conflicts among programs and apps that may be open at the same time.

– We need to easily adjust shortcuts so we can choose common shortcuts across programs and avoid clashes.

– We need to easily organize commands, arranging them and adding headings, so we can remember what we adjusted, find commands quickly, and over time build a good mental map of them. Lack of ability to organize is the Achilles’ heel of many macro facilities. It’s like asking people to play cards without being able to rearrange the cards in their hand. It’s possible, but it makes things unnecessarily difficult.

– We need to share the adjustments because it makes us much more efficient as a community. My friend Dan, for instance, is very logical. He uses many of the same programs I do, and we both use speech input. So if there were a facility to discover, adjust, organize and share keyboard shortcuts, I’d look to see whether Dan had posted his changes, and I would adjust them to my needs from there.

The organizing and sharing parts are the most important, because they allow for crowdsourcing.
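To make the idea concrete, here’s a minimal sketch, in Python, of what a shareable, cross-program shortcut map might look like. The program names, actions, headings and key strings are all hypothetical; the point is that a single plain data structure can be discovered, adjusted, organized under headings and passed along to someone else, and that conflicts across programs are easy to spot automatically.

    # Hypothetical shared shortcut map, organized under user-chosen headings.
    shortcuts = {
        "Editing": {
            "insert link":    {"Firefox": "Ctrl+K", "Thunderbird": "Ctrl+K", "Word": "Ctrl+K"},
            "hanging indent": {"Word": "Ctrl+T", "LibreOffice": "Ctrl+T"},
        },
        "Tabs and documents": {
            "close tab": {"Firefox": "Ctrl+W", "Thunderbird": "Ctrl+W", "Word": "Ctrl+F4"},
        },
    }

    def conflicts(mapping):
        """Return actions whose shortcuts differ from program to program."""
        found = []
        for heading, actions in mapping.items():
            for action, per_program in actions.items():
                if len(set(per_program.values())) > 1:
                    found.append((heading, action, per_program))
        return found

    for heading, action, per_program in conflicts(shortcuts):
        print(f"{heading} / {action} differs across programs: {per_program}")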

Over the past few decades the computer interface ecosystem has shifted from single, unrelated programs, to separate programs that share information, to programs so integrated that users may not know when they are going from one to another. This has increased ease of use and efficiency but at the same time complicated program control.

At the same time programs have grown more sophisticated. There’s a lot of wasted potential in untapped features.

If we give users the tools to discover, adjust, organize and share, I bet we’ll see an increase in speed and efficiency and an uptick in people discovering nifty new program features.

Suggestion for Dragon: Easier Correction

In the last couple of months I’ve had a couple of occasions to suggest to the folks at Nuance, the company that makes the Dragon NaturallySpeaking speech engine, that their “Resume With” command is underpublicized. The command is very useful, but I keep meeting people who don’t know about it.

“Resume With” lets you change text on the fly. For instance, if you say “The black cat jumped over the brown dog”, then — once you see it on the screen — change your mind about the last bit and say “Resume With over the moon”, the phrase will change to “The black cat jumped over the moon.”
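For the programmers in the audience, here’s a rough sketch in Python of the text-splicing behavior described above. It’s only an illustration of the idea, not Nuance’s actual algorithm: the first couple of words of the resume phrase act as an anchor, and everything in the previous utterance from that anchor onward is replaced.

    def resume_with(previous, resume_phrase, anchor_words=2):
        """Splice `resume_phrase` into `previous`, anchored on its first words."""
        old = previous.split()
        new = resume_phrase.split()
        anchor = new[:anchor_words]
        # Search backwards for the anchor words in the previous utterance.
        for i in range(len(old) - len(anchor), -1, -1):
            if old[i:i + len(anchor)] == anchor:
                return " ".join(old[:i] + new)
        # Anchor not found: leave the previous utterance alone.
        return previous

    print(resume_with("The black cat jumped over the brown dog", "over the moon"))
    # The black cat jumped over the moon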

This is a particularly useful command for doing something people do a lot — change text as they dictate.

Now I have a suggestion that I think would make the command both better and more often used. Split “Resume With” into two commands: “Try Again” and “Change To”. The two commands would have the same result as “Resume With”, but “Try Again” would tell the computer that the recognition engine got it wrong the first time and you are correcting the error. “Change To” would tell the computer that you are simply changing text.

This would be a less painful way to correct text than the traditional correction box. Users are tempted to change text rather than correct it because changing is easier. Splitting the command would make it equally easy to correct and change, using what is arguably the fastest and easiest way to make a change.

Easy correcting is important because NaturallySpeaking learns from corrections, and because it’s annoying when the computer gets things wrong. Correcting improves recognition, and minimizing the interruption reduces frustration and lets users concentrate on their work rather than spending time telling Dragon how to do its job. From my observations, many users change text rather than correct it when the computer gets something wrong simply because changing is easier.

It would be great to have these commands both in Dragon NaturallySpeaking on the desktop and in Dragon Dictation, the iPhone application. This would enable truly hands-free dictation in Dragon Dictation.

Speaking to Excel

I’ve gotten a lot of inquiries lately about using speech recognition in Excel.

The fastest way to learn to apply Utter Command to Excel is to read UC Lesson 10.9: Navigating, numbers, functions, selecting and formatting in tables and spreadsheets, and UC Lesson 10.10: Putting it all together in any program (say “UC Lesson 10 Point 9” and “UC Lesson 10 Point 10” to call them up). Then take a look at the Top Excel Guide, which opens a list of useful shortcuts along the right edge of your screen.

Here are some basics:

  • “Cell” followed by a letter and number jumps to any cell, e.g. “Cell B 2” or “Cell Bravo 2”
  • “Control Space” selects the column the cursor is on
  • “Shift Space” selects the row the cursor is on

Here are some particularly useful combinations:

  • A number followed by a direction selects cells — keep in mind you can select in two directions at once, e.g. “3 Rights · 5 Downs” to select 3 columns to the right and 5 rows down
  • “3 Downs · Control d” selects 3 rows down, then invokes the fill function to copy whatever was in the first row to the selected rows

And here’s a method that will save you time whether you use one formula or many:

Add the formulas you use to the Vocabulary Editor, with a comfortable spoken form. For instance “Equals Sum” to type “=SUM(”

To add a formula, say

  1. “NatSpeak Vocabulary”
  2. Speak the formula using “spell” to put the written form in the Written Form text box, e.g. “spell equals caps Sierra Uniform Mike open paren” to type “=SUM(”
  3. “1 Tab”
  4. Put a comfortable and memorable spoken form in the Spoken Form text box, e.g. “equals sum”.
  5. “Enter”
  6. “Escape” to exit or “Written Form” to add another.

Now every time you want to type “=SUM(”, say “equals sum”.

Tip: Finding a command

Here’s a very quick tip.

If you know the name of a command, or even part of it, and want to look it up in the Utter Command documentation, say

   – “UC Index” to bring up the Utter Command Index
   – “Find Open” to put the cursor in the find dialog box
   – type a keyword you want to look for, for instance “Wait”, “Drag” or “Before”
   – “Enter” to find the first instance
   – if necessary, “Enter” again to find subsequent instances

Once you find what you’re looking for, use the reference number to call up the full lesson on the command, e.g. “UC Lesson 4 Point 5”. This is also a good way to see the consistent patterns in the Utter Command speech command set.

Tell me what you think – reply here or let me know at info@ this website address.

Happy new year!

Trying out Dragon Search for the iPhone

Dragon Search is a nice app. Here’s how it works: open the app, hit one button, speak the phrase you want to search for. By default the app stops listening and starts the search when you pause so you don’t have to hit another button when you’re done.

The app comes up quickly, which from a practical standpoint is extremely important. And in my experience so far the search has been fast. There’s also a button you can push to cancel out of the search. The big plus of this application is the different search channels: Google, iTunes, Twitter, Wikipedia, and YouTube. You can search for something, like green apples, and the results will come up in the channel you used last. Once you’ve done a search you can switch channels easily to see results across channels.

I have a couple of practical suggestions.

1. The history list is just three items long — I’d like a much longer scrolling history list. Google Voice Search has a long scrolling list that includes dates. I would like to see Nuance improve on that.

2. I’d also like to be able to add my own channel.

I’ll also take the opportunity to repeat what I said a couple of days ago. I appreciate the progress on speech apps — don’t get me wrong. But speech on the iPhone is still not what I really want, which is system-level speech control of a mobile device that would give me the option to use speech for anything. These new apps are steps in the right direction — making the iPhone more hands-free. But there’s still a long way to go.

A few more thoughts on Dragon Dictation

I’ve been using Dragon Dictation on the iPhone a little more over the past few days and have a couple more thoughts for improvement.

1. If you select text in the full-screen application and then switch to the keyboard, the text doesn’t stay selected. It should. If you’ve selected an incorrect word or phrase, found there are no correct choices, and are proceeding to the keyboard to fix it, it’s frustrating to have to select the text again.

2. I’ve lost dictation a couple of times because I’ve switched out of the app — this is unexpected because writing apps like Notepad tend to stay where you left them. I suspect that Dragon Dictation maker Nuance made this choice in order to limit the number of steps for new dictation, but I think there are ways to provide this valuable option without increasing steps. The quick solution would be a “remember last dictation” option in settings that would let the user decide which way to do it. Maybe a better solution would be adding a “continue” button to the bottom of the initial screen. If you wanted to start fresh you would press the main button in the middle of the screen, but if you wanted to continue you could press the smaller “continue” button at the bottom.

Trying out Dragon Dictation for the iPhone

I’ve been trying out the Dragon Dictation iPhone app. It’s still not what I really want, which is system-level speech control of a mobile device that would give me the option to use speech for anything. But it’s a step in the right direction of making the iPhone more hands-free.

Here’s how Dragon Dictation for the iPhone works: open the app, hit one button, speak up to 30 seconds of dictation, then hit another button to say you’re done. Your dictation shows up on the screen a few seconds later. Behind the scenes the audio file you’ve dictated is sent to a server, put through a speech-recognition engine, and the results are sent back to your screen. Now you can add to your text by dictating again, or hit an actions button that gives you three choices: send what you’ve written to your e-mail app, send it to your text app, or copy it to the clipboard so you can paste it someplace else.
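In other words, the app is a thin client in front of a recognition server. Here’s a rough Python sketch of that round trip. The server address, audio format and response field are entirely hypothetical, since Nuance hasn’t published the protocol; the sketch just shows the shape of the exchange.

    # Hypothetical client-side round trip: record audio, send it to a
    # recognition server, display the text that comes back.
    import requests  # third-party HTTP library

    def transcribe(audio_bytes, server_url="https://speech.example.com/recognize"):
        """Send recorded audio to a recognition server and return the text."""
        response = requests.post(
            server_url,
            data=audio_bytes,
            headers={"Content-Type": "audio/wav"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["text"]  # hypothetical response field

    # with open("dictation.wav", "rb") as recording:
    #     print(transcribe(recording.read()))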

The recognition is usually fairly accurate in quiet environments. Not surprisingly, you get a lot of errors in noisy environments. To be fair, the built-in microphone on a mobile device is not optimal for speech recognition, and the app does pretty well given that constraint.

Here’s a practical suggestion that should be easy to implement: Add a decibel meter so people can see exactly how much background noise there is at any given time. This would make people more aware of background noise so they could set their expectations accordingly.
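The arithmetic behind such a meter is simple. Here’s a minimal Python sketch that turns a buffer of normalized audio samples into a level in dBFS (decibels relative to full scale); how the app would capture the samples from the microphone is left out.

    import math

    def level_dbfs(samples):
        """Return the RMS level of samples in the range -1.0..1.0, in dBFS."""
        if not samples:
            return float("-inf")
        rms = math.sqrt(sum(s * s for s in samples) / len(samples))
        return 20 * math.log10(rms) if rms > 0 else float("-inf")

    print(level_dbfs([0.01, -0.02, 0.015]))  # quiet room: strongly negative dBFS
    print(level_dbfs([0.5, -0.6, 0.55]))     # loud input: closer to 0 dBFS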

The interface for correcting errors is reasonable. Tap on a word and sometimes alternates are available, or you can delete it. Tap the keyboard button and you can use the regular system keyboard to clean things up.

I have two interface suggestions:

1. You can’t use the regular system copy and paste without going into the keyboard mode. You should be able to. I suspect this is fairly easy to fix.

2. There is no speech facility for correcting errors. I think there’s a practical fix here as well.

First, some background. Full dictation on a mobile device is tricky; full dictation speech engines take a lot of horsepower. Dragon Dictation sidesteps the problem by sending the dictation over the network to a server running a speech engine. The trade-off is that it’s difficult to give the user close control of the text — you must dictate in batches and wait briefly to see the results. This makes it more difficult to offer ways to correct using speech. But I think there is a good solution already in use on another platform.

Although it’s difficult to implement most speech commands given the server setup, the “Resume With” command that’s part of the Dragon NaturallySpeaking desktop speech application is a different animal. This command lets you start over at any point in the phrase you last dictated by picking up the last couple of words that will remain the same and dictating the rest over again.

Adding “Resume With” would make Dragon Dictation much more useful for people who are trying to be as hands-free as possible. It would also lower the frustration of misrecognitions and subtly teach people to dictate better.

It’s nice to see progress on mobile speech. I’m looking forward to more.