Monthly Archives: January 2008

Talking to your telephone vs. talking to your computer

The SpeechTEK speech conference has a lot to say about the state of the desktop speech interface. The exhibits in 2006 and 2007 were largely about where all the speech interface action is these days — not on the desktop, but over the telephone with interactive voice response (IVR) systems.

I went to several sessions aimed at the voice user interface designers (Vuids) who construct telephone speech command interfaces (even though I’m something of an imposter as a desktop voice user interface designer — I guess Dvuid would be the appropriate term).

We’re dealing with a lot of the same issues, though often with different twists:

  • Making sure people know what to say and stay oriented in the system
  • Accommodating beginners and experienced users
  • Making the process as fast and efficient as possible so people won’t hit the operator button or hang up (or not use the software — many people who buy desktop speech recognition software end up not using it)
  • Recognizing that in both cases the communications relationship is between a person and a machine

And we’re looking at similar answers:

  • Making commands consistent
  • Avoiding ambiguity
  • Doing user testing
  • Thinking about configuring information in a certain order to make it more memorable (good mental maps and appropriate training wheels)
  • And above all, avoiding the trap of thinking that people can just say anything: even if you truly could say anything, you still wouldn’t know what to say

I’ve also been thinking about the differences between IVR and the desktop speech interface — these differences make the challenges more difficult or easier for each of the systems.

  • Desktop users tend to follow a more predictable curve — they either get more experienced or drop the software entirely — while some IVR systems have to serve occasional users.
  • People are more often forced to use IVR, while most people can easily avoid the desktop speech interface if they wish.
  • The desktop is capable of both visual and audio feedback, while IVR systems tend to only have audio feedback. (Interestingly, even though most speech engines come with the ability to speak, desktop computer interfaces generally don’t use this feedback channel. We’ve had positive results in user testing of judicious use of audio feedback.)
  • Both systems suffer from the widespread use of pseudo natural language. Natural language doesn’t really exist on either type of system and trying to fake natural language creates its own problems.

Outside the mouse and keyboard box

Here’s an attempt to explain the potential of the speech interface.

Controlling a computer using a mouse and keyboard is a very specific type of control, and for many years it was all we knew. This type of control still defines how we think about communicating with computers.

While it’s good to tap existing knowledge, it’s important not to let experience confine new methods of communication.

In today’s speech interfaces, commands often follow in the footsteps of the keyboard and mouse (“File”, “Open”, “Budget”, “Enter”) rather than tapping the full potential of speech (“Budget Folder”).
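To make the contrast concrete, here is a minimal sketch. The command names and the lookup-table approach are illustrative assumptions, not Utter Command’s actual grammar: it shows the difference between echoing keyboard steps one at a time and letting a single phrase carry out the whole sequence.

```python
# Keyboard-and-mouse style: four separate utterances, one per interface step.
STEPWISE = ["File", "Open", "Budget", "Enter"]

# Combined-command style (hypothetical): one utterance maps to the whole sequence.
COMBINED = {
    "Budget Folder": ["File", "Open", "Budget", "Enter"],
}

def steps_for(utterance: str) -> list[str]:
    """Return the interface steps a single utterance carries out."""
    # An unknown phrase falls back to being a single step, like a lone keystroke.
    return COMBINED.get(utterance, [utterance])

print(steps_for("Budget Folder"))  # one phrase carries out four steps
```

The point of the sketch is only the shape of the mapping: speaking step-by-step costs one utterance per step, while a combined command collapses the sequence into one.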

Think about the differences between road travel and air travel.

A plane goes faster than a car, so following a road from the air is faster than driving, and following roads might not be a bad idea at first to get your bearings. But the real power of air travel is the ability to travel any route, including over areas inaccessible by car like large bodies of water, mountain ranges and polar regions.

The Human-Machine Grammar that underpins Utter Command is aimed at mapping the best way to use speech to control the computer. The real power of speech is the ability to command the computer in ways not possible using the keyboard and mouse.

Here’s another metaphor:

In the days when cars were cutting-edge, 15 miles an hour seemed fast — four times faster than walking, and you didn’t have to expend energy. Working on a computer may seem fast today. It’s not. Speech has the potential to take us into another realm in terms of productivity.

Redstart Systems and Utter Command: a brief history

Redstart Systems was born of user frustration and disappointment with speech software that seemed to fall far short of its potential.

In 1993 the heavy computer use of a deadline journalist caught up with me in the form of severe repetitive stress injuries in both hands. The good news was that desktop speech recognition had just arrived. The bad news was that it wasn’t really practical to use. Recognition wasn’t great, commands were difficult to remember, and commanding a computer by speech was just plain slow.

More than a decade later recognition has improved dramatically — the NaturallySpeaking speech engine makes dictating to a computer quite accurate. Commands, however, are still difficult to remember and often slower than the keyboard and mouse, making things like bringing up files, editing and formatting, cutting and pasting, setting up email messages, and Web research more than a little tedious.

Utter Command is the product of a decade of frustration and experimentation with a component of speech recognition that has lagged behind efforts to improve dictation accuracy — the speech user interface, or the words you use to control the computer.

Utter Command works the way your brain does and makes controlling your computer using speech commands cognitively easy and blazingly fast. Really. Commands are underpinned by an organized grammar system informed by cutting-edge research in cognition, linguistics, networking and human behavior. This makes commands easy to remember and, more importantly, gives you the ability to combine commands, which not only speeds everything up but enables more than is possible using just the keyboard and mouse.

Instead of following in the footsteps of the keyboard and mouse, Utter Command allows you to fly along by carrying out many computer steps at once. Take a look at someone humming along on the keyboard and mouse and notice how many steps everything takes. Most of these steps are only necessary because keys and screen space are limited. If you don’t have to think between steps, there’s no need for separate steps other than to accommodate the computer.

Here’s a quiz for you:

How many steps should you have to go through to

a. Navigate to a folder you already know the name of

b. Navigate to a file you already know the name of

c. Set up an email message to a couple of friends and CC a couple more

d. Search for the definition of “prosody” on a particular Web site

(Our answers are at the bottom of this post.)

It’s high time we stopped accommodating the computer.

We’re getting ready — interface-wise — to cross over to a world where speech commands will untether you from the keyboard and kick your computer use into high gear.

In this world you’ll have choices — for everything you do on the computer you can use speech, the keyboard, or the mouse. And if you need to use speech all the time, Utter Command allows you to do everything by speech that you can using the mouse and keyboard.

a. 1 step b. 1 step c. 1 step d. 1 step

Welcome to the Redstart Systems Blog

Redstart Systems makes speech interface software that speeds computer use. We’ve just launched a pre-release version of Utter Command for NaturallySpeaking Professional.

Utter Command is the culmination of more than a decade of using and thinking about the speech interface. Utter Command is based on Human-Machine Grammar, a system of words and rules that follows the way the brain works. UC commands are concise, consistent and combinable, which makes for powerful, easy-to-use speech software.

There’s lots more to think about, as technological improvements to speech engine software and microphones, faster computers, smaller computers, and new technologies like portable projectors and electronic paper make it more and more practical to use speech to control machines.

In this blog I’ll explore all aspects of using speech to control a computer.