Audio Input for On-Screen Manipulation (e.g., Voice-Controlled GUI) Patents (Class 715/728)
  • Publication number: 20080256452
    Abstract: Control of objects in a virtual representation includes receiving signals from audio-only devices, and controlling states of the objects in response to the signals.
    Type: Application
    Filed: July 6, 2007
    Publication date: October 16, 2008
    Inventor: Philipp Christian Berndt
  • Patent number: 7421655
    Abstract: M_IDs corresponding to “not available” and “recommended 1” are read for the respective setup items, and the images corresponding to “not available” and “recommended 1” are read (S803). For each setup item shown on the display screen, the image indicating “not available” or “recommended 1” is displayed next to that item's GUI component (S804).
    Type: Grant
    Filed: July 8, 2004
    Date of Patent: September 2, 2008
    Assignee: Canon Kabushiki Kaisha
    Inventors: Hiromi Ikeda, Makoto Hirota
  • Patent number: 7412391
    Abstract: A user interface design apparatus is provided that reduces the input-operation load on an author. When a speech recognition grammar including a semantic structure generating rule is acquired, at least one semantic structure is extracted from that rule and presented to the author. The author can select a presented semantic structural element using an input device. When the author completes the selection, the selected information is extracted and reflected in the user interface contents.
    Type: Grant
    Filed: November 22, 2005
    Date of Patent: August 12, 2008
    Assignee: Canon Kabushiki Kaisha
    Inventors: Kenichiro Nakagawa, Makoto Hirota, Hiroki Yamamoto
  • Patent number: 7383188
    Abstract: The invention is a method for object selection at a location. The method uses a mobile computer having a bar code reader, a display, an audio output device, an audio input device, a tactile input device, text-to-speech software, voice recognition software, object-selection application software, and a radio frequency identification (RFID) reader. The mobile computer is adapted for communication between an order systems server and a user, and the order systems server is adapted for communication between the mobile computer and at least one external computer system.
    Type: Grant
    Filed: November 9, 2006
    Date of Patent: June 3, 2008
    Assignee: Systems Application Engineering, Inc.
    Inventors: Jerry Dennis Sacks, James Michael Parks, Kenneth Ray Vestal
  • Publication number: 20080115063
    Abstract: Embodiments of media assembly are disclosed. In one method embodiment, the method includes manipulating at least one textual script file representing a number of component performance elements of a media program, presenting a visual representation of at least one of the number of component performance elements, cueing the artist to begin performing a take of a component performance element, capturing an artist's performance of at least one of the number of component performance elements, indicating whether mistakes were made by the artist during the take of the component performance element, and storing at least one recorded artist performance in memory.
    Type: Application
    Filed: November 13, 2006
    Publication date: May 15, 2008
    Inventor: Christopher J. Glenn
  • Patent number: 7352358
    Abstract: A method and system for enabling dynamic user interactivity between user actions and actions to be performed by a computer program is provided. The system includes a computing system and an acoustic detection analyzer that is coupled to or executed by the computing system. The system also includes an input device for interfacing with the computer program that is to be executed by the computing system. The input device has a gearing control for establishing a scaling between acoustic data produced at the input device and actions to be applied by the computer program as analyzed by the acoustic detection analyzer. The gearing can be set dynamically by the game or by the user, or it can be preset by software or configured by the user in accordance with a gearing algorithm.
    Type: Grant
    Filed: May 6, 2006
    Date of Patent: April 1, 2008
    Assignee: Sony Computer Entertainment America Inc.
    Inventors: Gary M. Zalewski, Richard Marks, Xiadong Mao
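    The gearing described in the abstract above is, in effect, a scaling factor between detected acoustic input and the action the program applies, and that factor can change at run time. A minimal Python sketch of that idea, with illustrative class and method names not taken from the patent:

    ```python
    # Minimal sketch: a gearing factor scales detected acoustic amplitude into
    # the magnitude of an in-program action. Names are illustrative assumptions.

    class GearedAcousticControl:
        def __init__(self, gearing: float = 1.0):
            self.gearing = gearing  # may be changed dynamically by game or user

        def set_gearing(self, gearing: float) -> None:
            self.gearing = gearing

        def action_magnitude(self, amplitude: float) -> float:
            # Scale the detected acoustic amplitude into the action applied by
            # the program (e.g., steering angle, thrust).
            return self.gearing * amplitude


    if __name__ == "__main__":
        control = GearedAcousticControl(gearing=0.5)
        print(control.action_magnitude(0.8))  # 0.4
        control.set_gearing(2.0)              # the game raises sensitivity mid-play
        print(control.action_magnitude(0.8))  # 1.6 (more action per unit of sound)
    ```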
  • Patent number: 7307615
    Abstract: A pointing device is provided for use in computing devices comprising a printed circuit board, a tracking device adapted to generate a tracking signal in response to a user vocal input and relay the tracking signal to the printed circuit board, and a selection device adapted to generate a selection signal in response to a user manipulation and relay the selection signal to the printed circuit board.
    Type: Grant
    Filed: August 8, 2003
    Date of Patent: December 11, 2007
    Assignee: Lucent Technologies Inc.
    Inventors: Narayan L. Gehlot, Victor B. Lawrence
  • Patent number: 7305694
    Abstract: System and method for automatically controlling a media receiver by instructing the media receiver to use a particular receiver connection and to play a selected media unit using one of a plurality of play modes according to characteristics of the media unit. Media units may be encoded using any of a variety of encoding formats. The media management system may interface with a media receiver to select media receiver connections in accordance with the media type of the media unit. The media management system may also interface with the media receiver to set media receiver settings for playing the selected media unit according to the media receiver settings selected for a play mode corresponding to the characteristics of the selected media unit.
    Type: Grant
    Filed: September 7, 2004
    Date of Patent: December 4, 2007
    Assignee: Digital Networks North America, Inc.
    Inventors: Christopher Commons, Marty R. Wachter, Robert A. Bouterse
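    The abstract above describes a mapping from a media unit's characteristics to a receiver connection and a play mode. A minimal sketch of that mapping, assuming illustrative lookup tables and field names that are not part of the patent:

    ```python
    # Minimal sketch: choose a receiver connection and play mode from the media
    # unit's type and encoding format. Tables and field names are assumptions.

    CONNECTION_BY_TYPE = {"audio": "optical", "video": "hdmi"}
    PLAY_MODE_BY_FORMAT = {"mp3": "stereo", "ac3": "surround", "mpeg2": "video-normal"}

    def configure_receiver(media_unit: dict) -> dict:
        """Return the receiver settings to use for playing a media unit."""
        connection = CONNECTION_BY_TYPE.get(media_unit["type"], "analog")
        play_mode = PLAY_MODE_BY_FORMAT.get(media_unit["format"], "default")
        return {"connection": connection, "play_mode": play_mode}

    if __name__ == "__main__":
        print(configure_receiver({"type": "audio", "format": "ac3"}))
        # {'connection': 'optical', 'play_mode': 'surround'}
    ```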
  • Patent number: 7302644
    Abstract: An integrated, fully automated video production system that provides a video director with total control over all of the video production devices used in producing a show. Such devices include, but are not limited to, cameras, robotic pan/tilt heads, video tape players and recorders (VTRs), video servers and virtual recorders, character generators, still stores, digital video disk players (DVDs), audio mixers, digital video effects (DVE), video switchers, and teleprompting systems. The video production system provides an automation capability that allows the video director to pre-produce a show, review the show in advance of “air time,” and then, with a touch of a button, produce the live show. In one embodiment, the invention provides a video production system having a processing unit in communication with one or more of the video production devices mentioned above.
    Type: Grant
    Filed: April 15, 2002
    Date of Patent: November 27, 2007
    Assignee: Thomson Licensing
    Inventors: Alex Holtz, David E Buehnemann, Gilberto Fres, Harrison T Hickenlooper, III, Charles M Hoeppner, Kevin K Morrow, Bradley E Neider, Loren J Nordin, III, Todd D Parker, Robert J Snyder
  • Patent number: 7272794
    Abstract: Methods and systems are described that assist media players in rendering different media types. In some embodiments, a unified rendering area is provided and managed such that multiple different media types are rendered by the media player in the same user interface area. This unified rendering area thus permits different media types to be presented to a user in an integrated and organized manner. An underlying object model promotes the unified rendering area by providing a base rendering object that has properties that are shared among the different media types. Object sub-classes are provided and are each associated with a different media type, and have properties that extend the shared properties of the base rendering object. In addition, an inventive approach to visualizations is presented that provides better synchronization between a visualization and its associated audio stream.
    Type: Grant
    Filed: February 22, 2005
    Date of Patent: September 18, 2007
    Assignee: Microsoft Corporation
    Inventors: Tedd Dideriksen, Chris Feller, Geoffrey Harris, Michael J. Novak, Kipley J. Olson
  • Publication number: 20070186165
    Abstract: Methods, apparatus and computer-code for electronically providing advertisement are disclosed herein. In some embodiments, advertisements are provided in accordance with at least one feature of electronic media content of a multi-party conversation, for example, by targeting at least one advertisement to at least one individual associated with a party of the multi-party voice conversation. Optionally, the multi-party conversation is a video conversation and at least one feature is a video content feature. Exemplary features include but are not limited to speech delivery features, key word features, topic features, background sound or image features, deviation features and biometric features. Techniques for providing advertisements in accordance with any voice electronic media content, including but not limited to voice mail content, are also disclosed.
    Type: Application
    Filed: February 7, 2007
    Publication date: August 9, 2007
    Applicant: PUDDING LTD.
    Inventors: Ariel Maislos, Ruben Maislos, Eran Arbel
  • Patent number: 7254543
    Abstract: A television apparatus equipped with a speech recognition function is provided. When a unit of language uttered by a user is not acceptable under the current operation mode, the television apparatus either operates in accordance with a unit of language having an operable superordinate concept or operates after changing the operation mode. The television apparatus judges whether it can operate in accordance with a speech code accepted under the current operation mode and, if so, operates under that mode. If it cannot, and there is a speech code having an operable superordinate concept, the television apparatus operates in accordance with that speech code.
    Type: Grant
    Filed: December 16, 2002
    Date of Patent: August 7, 2007
    Inventors: Toshio Ibaraki, Masato Konishi, Hiroyuki Suda, Tsuyoshi Fukumoto
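    The fallback behavior in the abstract above can be read as a lookup against the current operation mode, with a second lookup on a concept hierarchy when the uttered speech code is not directly operable. A minimal sketch under that reading; the hierarchy and mode table are illustrative assumptions:

    ```python
    # Minimal sketch: if a speech code is not operable in the current mode, fall
    # back to a superordinate concept that is. Tables are illustrative assumptions.

    SUPERORDINATE = {"channel 12": "change channel", "a little louder": "volume up"}
    OPERABLE_IN_MODE = {
        "tv":   {"volume up", "change channel"},
        "menu": {"volume up"},
    }

    def handle_speech_code(code: str, mode: str) -> str:
        operable = OPERABLE_IN_MODE.get(mode, set())
        if code in operable:
            return f"execute '{code}' in mode '{mode}'"
        parent = SUPERORDINATE.get(code)
        if parent and parent in operable:
            # The uttered code is not directly operable, but its superordinate
            # concept is, so operate in accordance with that concept.
            return f"execute superordinate '{parent}' in mode '{mode}'"
        return f"'{code}' is not operable in mode '{mode}'"

    if __name__ == "__main__":
        print(handle_speech_code("channel 12", "tv"))    # superordinate fallback
        print(handle_speech_code("channel 12", "menu"))  # not operable in this mode
    ```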
  • Patent number: 7222301
    Abstract: A network system enables voice interaction between communications-center applications and human agents remote from the center. It has a primary server connected to the network that controls at least one routing point used by the center; a secondary server connected to the network that generates and serves voice extensible markup language (VXML); a voice gateway associated with the secondary server that executes VXML and recognizes speech input; and a software platform based in the primary server and distributed in part as a server application to the secondary server, functioning as a data transformation interface between the center applications and the gateway. In a preferred use, agents and applications communicate bidirectionally using VXML.
    Type: Grant
    Filed: April 2, 2003
    Date of Patent: May 22, 2007
    Assignee: Genesys Telecommunications Laboratories, Inc.
    Inventors: Petr Makagon, Andriy Ryabchun, Nikolay Anisimov
  • Patent number: 7219123
    Abstract: A mobile information network browser device with audio feedback and adaptive personalization capability that is capable of transmitting a request for information via a wireless communication interface from one or more servers in an information network. The browser device further includes an audio interface capable of receiving data from the wireless communication interface that is responsive to the request for information. The browser device interfaces with a wireless communication network so that it may be used in a mobile vehicle, such as an automobile. The order in which individual pieces of content in the requested information are presented to the user is modified based on indicators of the user's interest in a topic during previous sessions. Such indicators can include whether the user input a command to skip, fast-forward, rewind, or request more detail about a category, subcategory, or topic of information. The adaptive personalization capability can also prevent redundant content from being presented.
    Type: Grant
    Filed: November 21, 2000
    Date of Patent: May 15, 2007
    Assignee: At Road, Inc.
    Inventors: Claude-Nicolas Fiechter, Amir Ben-Efraim, T Hea Nahm, David Hudson
  • Patent number: 7203907
    Abstract: A first-modality gateway and a second-modality gateway are synchronized, with both gateways interfacing between a user and a server system. The synchronizing allows the user to use either of the first-modality gateway or the second-modality gateway at a given point in time to interface with specific information in the server system. A method includes accessing a communication sent from a first-modality gateway, and providing a synchronizing mechanism in response to accessing the communication. Another method includes receiving a request for a first-modality data from a first-modality entity, determining a second-modality data, and providing the second-modality data to a second-modality entity, where the second-modality data corresponds to the first-modality data. An article includes a first-modality interface, a second-modality interface, and a controller interface.
    Type: Grant
    Filed: April 25, 2002
    Date of Patent: April 10, 2007
    Assignee: SAP Aktiengesellschaft
    Inventors: Jie Weng, Richard Swan, Hartmut Vogler, Samir Raiyani
  • Patent number: 7149694
    Abstract: A method and system for updating grammar information in a voice access system that provides access to a data system. A user interface (UI) enables an administrator to select the UI objects, belonging to the user interface of the data system, for which grammar update support is to be provided. The data system user interface corresponds to the UI that users would see when using a computer client connection to the data system. XSLT style sheets are built based on the UI objects selected for grammar update. A grammar update request may then be submitted that identifies a navigation context of the data system UI, a style sheet to apply, and an optional last-update value. In response to the request, the system retrieves data from the data system pertaining to the navigation context, filters the data using the identified style sheet and the last-update value, and provides the filtered data back to the voice access system to update its grammar.
    Type: Grant
    Filed: April 23, 2002
    Date of Patent: December 12, 2006
    Assignee: Siebel Systems, Inc.
    Inventors: Joseph Harb, David George, Chris Haven, Dennis Ferry, Wen-Hsin Lee, Jaya Srinivasan
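    The request/filter flow in the abstract above takes a navigation context, a style sheet, and an optional last-update value, and returns only the data the voice system needs for its grammar. A minimal sketch of that flow; a plain Python filter function stands in for a real XSLT style sheet, and all names are illustrative assumptions:

    ```python
    # Minimal sketch: retrieve records for a navigation context, drop anything
    # older than the last-update value, and apply a "style sheet" filter that
    # keeps only the fields needed for grammar terms. Names are assumptions.

    from datetime import datetime
    from typing import Callable, Optional

    DATA_SYSTEM = {
        "contacts": [
            {"name": "Alice", "updated": datetime(2024, 5, 1)},
            {"name": "Bob",   "updated": datetime(2024, 6, 1)},
        ]
    }

    def handle_grammar_update(context: str,
                              style_sheet: Callable[[dict], str],
                              last_update: Optional[datetime] = None) -> list:
        records = DATA_SYSTEM.get(context, [])
        if last_update is not None:
            # Only records changed since the voice system's last update.
            records = [r for r in records if r["updated"] > last_update]
        return [style_sheet(r) for r in records]

    if __name__ == "__main__":
        name_only = lambda record: record["name"]
        print(handle_grammar_update("contacts", name_only,
                                    last_update=datetime(2024, 5, 15)))  # ['Bob']
    ```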
  • Patent number: 7113169
    Abstract: This invention relates to untethered multiple-user interaction with large information displays using laser pointers coordinated with voice commands. A projection system projects application windows onto a large information display. One or more users may command their respective window applications using laser pointers and/or voice commands. A registration program assigns a unique identification to each user that associates that user's voice, and the laser pointer pattern chosen by that user, with the user. Cameras scan the information display and process the composite of the application windows and any laser pointer images thereon. A sequence of computer decisions checks each laser pointer command so as to correctly associate respective users with their commands and application windows. Users may also speak voice commands, and the system performs speech recognition of the user's voice command.
    Type: Grant
    Filed: March 18, 2002
    Date of Patent: September 26, 2006
    Assignee: The United States of America as represented by the Secretary of the Air Force
    Inventors: Sakunthala Gnanamgari, Jacqueline Dacre Smith
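    The registration step in the abstract above ties a user's voice and chosen laser-pointer pattern to a single identification, which is then used to route each recognized command to that user's application window. A minimal sketch of such a registry; the class, field, and window names are illustrative assumptions:

    ```python
    # Minimal sketch: one ID per user links a voice profile and a laser-pointer
    # pattern to that user's window, so commands arriving by either channel are
    # routed to the right application. Names are illustrative assumptions.

    import itertools

    class Registration:
        def __init__(self):
            self._ids = itertools.count(1)
            self.by_laser_pattern = {}
            self.by_voice_profile = {}
            self.window_for_user = {}

        def register(self, voice_profile: str, laser_pattern: str, window: str) -> int:
            user_id = next(self._ids)
            self.by_voice_profile[voice_profile] = user_id
            self.by_laser_pattern[laser_pattern] = user_id
            self.window_for_user[user_id] = window
            return user_id

        def route_laser_command(self, laser_pattern: str, command: str) -> str:
            user_id = self.by_laser_pattern[laser_pattern]
            return f"user {user_id}: '{command}' -> {self.window_for_user[user_id]}"

        def route_voice_command(self, voice_profile: str, command: str) -> str:
            user_id = self.by_voice_profile[voice_profile]
            return f"user {user_id}: '{command}' -> {self.window_for_user[user_id]}"

    if __name__ == "__main__":
        reg = Registration()
        reg.register("alice-voiceprint", "triangle", "map-window")
        reg.register("bob-voiceprint", "circle", "chart-window")
        print(reg.route_laser_command("circle", "zoom in"))
        print(reg.route_voice_command("alice-voiceprint", "next slide"))
    ```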
  • Patent number: 7106298
    Abstract: A network-enabled user interface device, for example a VoIP telephony device, includes a display, a user input interface, an interface controller, and an application controller. The display is logically defined to include multiple distinct display areas. The interface controller is configured for generating display elements for the respective display areas based on received display requests, and controlling the user input interface based on received commands, and outputting responses to the application controller. The application controller is configured for supplying the commands to the interface controller and display requests based on execution of application operations. The application operations may be executed locally (i.e., within the user interface device), or remotely (e.g., by a server in communication with the user interface device). Remote application operations may include communications between the application controller and the remote server.
    Type: Grant
    Filed: September 19, 2001
    Date of Patent: September 12, 2006
    Assignee: Cisco Technology, Inc.
    Inventors: Bryan C. Turner, John Toebes
  • Patent number: 7107541
    Abstract: A message object corresponding to an input voice signal moves upward as time passes, like a bubble, after the voice signal is input. Accordingly, the user can recognize the input data easily and intuitively. The user selects a desired recognized message object and drags and drops the message object to a desired destination object. In this way, the user can transmit a voice signal associated with the message object to the terminal corresponding to the destination object.
    Type: Grant
    Filed: December 16, 2002
    Date of Patent: September 12, 2006
    Assignee: Sony Corporation
    Inventor: Toshiyuki Sakai
  • Patent number: 7099829
    Abstract: The present invention provides a method of dynamically displaying speech recognition system information. The method can include providing a single floating window for displaying frames of speech recognition system state information to a user. The frames can be varied according to trigger events detected in the speech recognition system. Each frame can differ from others of the frames according to the speech recognition system state information.
    Type: Grant
    Filed: November 6, 2001
    Date of Patent: August 29, 2006
    Assignee: International Business Machines Corporation
    Inventor: Felipe Gomez
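    The abstract above varies the frames shown in a single floating window according to trigger events in the recognition system. A minimal sketch of that event-to-frame mapping, with an assumed set of events and frame texts:

    ```python
    # Minimal sketch: trigger events select which frame of speech-system state
    # the single floating window shows. The event table is an assumption.

    FRAME_FOR_EVENT = {
        "mic_on": "Listening...",
        "mic_off": "Microphone off",
        "dictation_started": "Dictation mode",
        "command_recognized": "Command: {detail}",
    }

    class FloatingStatusWindow:
        def __init__(self):
            self.current_frame = "Speech system idle"

        def on_trigger_event(self, event: str, detail: str = "") -> None:
            template = FRAME_FOR_EVENT.get(event, self.current_frame)
            self.current_frame = template.format(detail=detail)
            print(f"[floating window] {self.current_frame}")

    if __name__ == "__main__":
        window = FloatingStatusWindow()
        window.on_trigger_event("mic_on")
        window.on_trigger_event("command_recognized", detail="open mail")
    ```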
  • Patent number: 7096163
    Abstract: A building construction drawing system incorporates a voice activated command recognition and actuation module to enable a building construction system designer, such as a fire sprinkler system designer, to be able to draw building construction systems, or components thereof, using simple and easy to remember voice commands. This building construction drawing system is easier and faster for the designer to learn and to use effectively because the designer does not need to know or learn how to find the command he or she wants using a traditional mouse or keyboard input device, which can be cumbersome and slow.
    Type: Grant
    Filed: February 22, 2002
    Date of Patent: August 22, 2006
    Inventor: Joseph P. Reghetti
  • Patent number: 7079711
    Abstract: The present invention describes a method of validating parameters defining an image, each parameter being represented by one of the vertices of a polygon and capable of being associated with one or more functionalities. A point that can move within the polygon makes it possible to validate the parameters and the associated functionalities according to the position of the point with respect to the vertices of the polygon. The present invention also describes a search method including at least one parameter validation step as described above.
    Type: Grant
    Filed: March 12, 2002
    Date of Patent: July 18, 2006
    Assignee: Canon Kabushiki Kaisha
    Inventor: Lilian Labelle
  • Patent number: 7058890
    Abstract: A method and system that provide filtered data from a data system. In one embodiment, the system includes an API (application programming interface) and associated software modules that enable third-party applications to access an enterprise data system. Administrators can select specific user interface (UI) objects, such as screens, views, applets, columns, and fields, to voice-enable or pass-through-enable via a GUI that presents a tree depicting the hierarchy of the UI objects within an application's user interface. An XSLT style sheet is then automatically generated to filter out data pertaining to UI objects that were not voice- or pass-through-enabled. In response to a request for data, unfiltered data are retrieved from the data system and a specified style sheet is applied to the unfiltered data to return filtered data pertaining to only those fields and columns that are voice- or pass-through-enabled.
    Type: Grant
    Filed: April 23, 2002
    Date of Patent: June 6, 2006
    Assignee: Siebel Systems, Inc.
    Inventors: David George, Joseph Harb, Chris Haven, Dennis Ferry, Wen-Hsin Lee, Jaya Srinivasan
  • Patent number: 7036080
    Abstract: A method and apparatus for providing speech control to a graphical user interface (GUI) divide a GUI into a plurality of screen areas; assign the screen areas priorities; receive a first audio input relating to the selection of one of the objects in the interface; determine the one of the screen areas having the highest priority and including a first object matching the first audio input; and select the first object in the determined screen area if the determined screen area only contains one object matching the first audio input. The method and apparatus also select one of the objects that matches the first audio input in the determined screen area if the determined screen area contains more than one object that matches the first audio input.
    Type: Grant
    Filed: November 30, 2001
    Date of Patent: April 25, 2006
    Assignee: SAP Labs, Inc.
    Inventors: Frankie James, Jeff Roelands
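    The selection logic in the abstract above searches screen areas in priority order and selects a matching object outright only when the highest-priority matching area contains exactly one match. A minimal sketch; the areas, priorities, and disambiguation step are illustrative assumptions:

    ```python
    # Minimal sketch: find the highest-priority screen area containing an object
    # that matches the spoken input; select it if the match is unique, otherwise
    # disambiguate within that area. Area contents are assumptions.

    SCREEN_AREAS = [
        # (priority, area name, labels of objects in the area)
        (3, "dialog",  ["OK", "Cancel"]),
        (2, "toolbar", ["Save", "Print", "Print Preview"]),
        (1, "sidebar", ["Print"]),
    ]

    def select_by_voice(spoken: str) -> str:
        for priority, area, objects in sorted(SCREEN_AREAS, key=lambda a: -a[0]):
            matches = [o for o in objects if spoken.lower() in o.lower()]
            if matches:
                if len(matches) == 1:
                    return f"select '{matches[0]}' in {area}"
                # More than one match in the winning area: pick one of them
                # (e.g., by prompting the user); here the list is reported.
                return f"disambiguate {matches} in {area}"
        return f"no object matches '{spoken}'"

    if __name__ == "__main__":
        print(select_by_voice("cancel"))  # unique match in the top-priority dialog
        print(select_by_voice("print"))   # two matches in the toolbar area
    ```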
  • Patent number: 7020841
    Abstract: Systems and methods are provided for rendering modality-independent scripts (e.g., intent-based markup scripts) in a multi-modal environment, whereby a user can interact with an application using a plurality of modalities (e.g., speech and GUI) with I/O events being automatically synchronized over the plurality of modalities presented. In one aspect, immediate synchronized rendering of the modality-independent document in each of the supported modalities is provided. In another aspect, deferred rendering and presentation of intent-based scripts to an end user is provided, wherein a speech markup language script (such as a VoiceXML document) is generated from the modality-independent script and rendered (via, e.g., VoiceXML browser) at a later time.
    Type: Grant
    Filed: June 7, 2001
    Date of Patent: March 28, 2006
    Assignee: International Business Machines Corporation
    Inventors: Paul M. Dantzig, Robert Filepp, Yew-Huey Liu
  • Patent number: 6934907
    Abstract: A method, program, and apparatus for providing a description of current position in an electronic document are provided. The invention first comprises parsing the electronic document into a parse tree. When the system receives a command from the user requesting current position in the electronic document, an algorithm performs a walk up the parse tree, from the current position to the root of the document. A position response, containing nodes in the walk up the parse tree, is constructed by the algorithm and reported to the user.
    Type: Grant
    Filed: March 22, 2001
    Date of Patent: August 23, 2005
    Assignee: International Business Machines Corporation
    Inventors: Thomas Andrew Brunet, Elliott Woodard Harris, Lawrence Frank Weiss, Guido Dante Corona
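    The walk described in the abstract above starts at the current node and climbs to the document root, reporting the chain of nodes as the position description. A minimal sketch using the standard library's ElementTree; the sample document and output phrasing are illustrative assumptions:

    ```python
    # Minimal sketch: parse a document into a tree, then answer "where am I?" by
    # walking from the current node up to the root and reporting the node chain.

    import xml.etree.ElementTree as ET

    DOC = "<html><body><table><tr><td>cell text</td></tr></table></body></html>"

    def describe_position(root: ET.Element, current: ET.Element) -> str:
        # ElementTree elements have no parent pointers, so build a
        # child -> parent map first.
        parent_of = {child: parent for parent in root.iter() for child in parent}
        path, node = [], current
        while node is not None:
            path.append(node.tag)
            node = parent_of.get(node)
        return " inside ".join(path)

    if __name__ == "__main__":
        root = ET.fromstring(DOC)
        cell = root.find(".//td")
        print(describe_position(root, cell))
        # td inside tr inside table inside body inside html
    ```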
  • Patent number: 6931596
    Abstract: A system having a video display screen that provides video to a user. The position of the display screen is adjustable based upon the location of the user with respect to the display screen. The system includes at least one image capturing device trainable on a viewing region of the display screen and coupled to a control unit having image recognition software. The image recognition software identifies the user in an image generated by the image capturing device. The software of the control unit also generates at least one measurement of the position of the user based upon the detection of the user in the image.
    Type: Grant
    Filed: March 5, 2001
    Date of Patent: August 16, 2005
    Assignee: Koninklijke Philips Electronics N.V.
    Inventors: Srinivas Gutta, Kaushal Kurapati, Antonio J. Colmenarez
  • Patent number: 6928614
    Abstract: A user interface for a mobile office is provided for allowing simple, safe, and convenient access to electronic mail, calendar, news, and web browser functions. The dialog or number of steps required to access desired items is minimized using a state controller responsive to voice commands and manual activations of reconfigurable steering wheel switches. Someone unfamiliar with the user interface is assisted by prompts for various commands and can use the mobile office without needing to resort to use of the reconfigurable steering wheel control elements. A more experienced user can bypass prompts by interrupting them with voice commands and can quickly move through various steps by utilizing the configurable steering wheel control elements to gain access to individual items within the mail, calendar, and news functions.
    Type: Grant
    Filed: October 13, 1998
    Date of Patent: August 9, 2005
    Assignee: Visteon Global Technologies, Inc.
    Inventor: Charles Allen Everhart
  • Patent number: 6920614
    Abstract: An entertainment system has a personal computer as the heart of the system with a large screen VGA quality monitor as the display of choice. The system has digital satellite broadcast reception, decompression and display capability with multiple radio frequency remote control devices which transmit self identifying signals and have power adjustment capabilities. These capabilities are used to provide context sensitive groups of keys which may be defined to affect only selected applications running in a windowing environment. In addition, the remote control devices combine television and VCR controls with standard personal computer keyboard controls. A keyboard remote also integrates a touchpad which is food contamination resistant and may also be used for user verification. Included in the system is the ability to recognize verbal communications in video signals and maintain a database of such text which is searchable to help identify desired programming in real time.
    Type: Grant
    Filed: December 20, 2001
    Date of Patent: July 19, 2005
    Assignee: Gateway Inc.
    Inventors: Jeffrey Schindler, Robert Moore, David S. Zyzda
  • Patent number: 6904566
    Abstract: Methods and systems are described that assist media players in rendering different media types. In some embodiments, a unified rendering area is provided and managed such that multiple different media types are rendered by the media player in the same user interface area. This unified rendering area thus permits different media types to be presented to a user in an integrated and organized manner. An underlying object model promotes the unified rendering area by providing a base rendering object that has properties that are shared among the different media types. Object sub-classes are provided and are each associated with a different media type, and have properties that extend the shared properties of the base rendering object. In addition, an inventive approach to visualizations is presented that provides better synchronization between a visualization and its associated audio stream.
    Type: Grant
    Filed: March 26, 2001
    Date of Patent: June 7, 2005
    Assignee: Microsoft Corporation
    Inventors: Chris Feller, Geoffrey Harris, Kipley J. Olson, Michael J. Novak, Tedd K Dideriksen
  • Patent number: 6895558
    Abstract: A system enables communication between server resources and a wide spectrum of end-terminals to enable access to the resources of both converged and non-converged networks via voice and/or electronically generated commands. An electronic personal assistant (ePA) incorporates generalizing/abstracting communications channels, data and resources provided through a converged computer/telephony system interface such that the data and resources are readily accessed by a variety of interface formats including a voice interface or data interface. A set of applications provides dual interfaces for rendering services and data based upon the manner in which a user accesses the data. An electronic personal assistant in accordance with an embodiment of the invention provides voice/data access to web pages, email, file shares, etc. A voice-based resource server authenticates a user by receiving vocal responses to one or more requests variably selected and issued by a speaker recognition-based authentication facility.
    Type: Grant
    Filed: February 11, 2000
    Date of Patent: May 17, 2005
    Assignee: Microsoft Corporation
    Inventor: Shawn D. Loveland
  • Patent number: 6892353
    Abstract: A method and apparatus is described that allows edited media to be recorded to a sequential storage device. An edited time based stream of information of a source media is displayed. The edited time based stream is transferred to a sequential storage device to be recorded using an icon where the icon represents a function to be performed on the storage device.
    Type: Grant
    Filed: April 2, 1999
    Date of Patent: May 10, 2005
    Assignee: Apple Computer, Inc.
    Inventor: Randy Ubillos
  • Patent number: 6882337
    Abstract: A virtual keyboard displayed on a touch-sensitive screen allows a user to do touch-typing thereon to enter textual data into a computer. The keyboard image has a standard key layout for typewriting, and the keys are sized to allow the fingers of the user to take the positions necessary for “ten-finger” touch-typing in the standard fashion. The virtual keyboard image is semi-transparently displayed over a background image, with the individual keys shown with shaded edges so that they can be easily distinguished from features in the background image. When a key is touched, a sound is generated. The sound generated when the touch is away from a target portion of the key is different from the sound generated when the touch is on or adjacent to the target portion of the key, thereby providing audio feedback that enables the user to adjust finger positions and maintain proper alignment with the virtual keys.
    Type: Grant
    Filed: April 18, 2002
    Date of Patent: April 19, 2005
    Assignee: Microsoft Corporation
    Inventor: Martin Shetter
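    The audio feedback described in the abstract above depends on whether the touch lands on the key's target portion or away from it. A minimal sketch of that decision, assuming an illustrative key geometry and sound file names:

    ```python
    # Minimal sketch: a touch near a virtual key's centre (its target portion)
    # plays one sound, a touch near the edge plays another, so a touch-typist
    # can keep fingers aligned by ear. Geometry and sounds are assumptions.

    import math

    KEYS = {  # key label -> (centre_x, centre_y, half_width)
        "F": (100, 200, 20),
        "J": (220, 200, 20),
    }

    def touch_feedback(key: str, x: float, y: float) -> str:
        cx, cy, half = KEYS[key]
        distance = math.hypot(x - cx, y - cy)
        # A touch within the inner half of the key counts as "on target".
        return "on_target_click.wav" if distance <= half / 2 else "off_target_click.wav"

    if __name__ == "__main__":
        print(touch_feedback("F", 103, 202))  # near the centre -> on-target sound
        print(touch_feedback("F", 117, 210))  # near the edge   -> off-target sound
    ```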
  • Patent number: 6862713
    Abstract: A method for presenting to an end-user the intermediate matching search results of a keyword search in an indexed list of information. The method comprises the steps of: coupling to a search engine a graphical user interface for accepting keyword search terms for searching the indexed list of information with the search engine; receiving one or more keyword search terms with one or more separation characters separating them; performing a keyword search with the one or more keyword search terms received when a separation character is received; and presenting the number of documents matching the keyword search terms to the end-user, along with a graphical menu item on a display. In accordance with another embodiment of the present invention, an information processing system and a computer readable storage medium carry out the above method.
    Type: Grant
    Filed: August 31, 1999
    Date of Patent: March 1, 2005
    Assignee: International Business Machines Corporation
    Inventors: Reiner Kraft, W. Scott Spangler
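    The behavior described in the abstract above runs an intermediate search every time a separation character arrives and reports the number of matching documents. A minimal sketch, with a space as the separation character and a tiny in-memory document list as an assumption:

    ```python
    # Minimal sketch: each time a separation character (a space) is typed, search
    # with the terms entered so far and show the match count. The document list
    # is an illustrative assumption.

    DOCUMENTS = [
        "voice controlled user interface",
        "graphical user interface design",
        "speech recognition grammar",
    ]

    def matching_documents(terms):
        return sum(all(t in doc for t in terms) for doc in DOCUMENTS)

    def on_keystrokes(typed: str) -> None:
        buffer = ""
        for ch in typed:
            buffer += ch
            if ch == " ":  # separation character received: run an intermediate search
                terms = buffer.split()
                print(f"{buffer!r}: {matching_documents(terms)} matching documents")

    if __name__ == "__main__":
        on_keystrokes("user interface ")
        # 'user ': 2 matching documents
        # 'user interface ': 2 matching documents
    ```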
  • Publication number: 20040260438
    Abstract: A method of synchronizing a voice user interface (VUI) with a graphical user interface (GUI) includes a number of steps. Initially, a first screen is displayed that has an associated first plurality of voice commands that are available for controlling a first application. Upon receiving an event, a second screen is displayed in response to the event with the second screen being associated with a second application. A second plurality of voice commands, which are available for controlling the second application, are then activated and the first plurality of voice commands are deactivated.
    Type: Application
    Filed: June 17, 2003
    Publication date: December 23, 2004
    Inventors: Victor V. Chernetsky, Mona L. Toms
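    The synchronization described in the abstract above amounts to swapping the active set of voice commands whenever an event changes the displayed screen. A minimal sketch; the screen names and commands are illustrative assumptions:

    ```python
    # Minimal sketch: each screen has its own voice command set; a screen-change
    # event activates the new set and deactivates the old one. Names are
    # illustrative assumptions.

    VOICE_COMMANDS = {
        "navigation_screen": {"zoom in", "zoom out", "next turn"},
        "radio_screen": {"next station", "volume up", "volume down"},
    }

    class VoiceUIState:
        def __init__(self, initial_screen: str):
            self.active_screen = initial_screen
            self.active_commands = VOICE_COMMANDS[initial_screen]

        def on_screen_change(self, new_screen: str) -> None:
            # Deactivate the old screen's commands, activate the new screen's.
            self.active_screen = new_screen
            self.active_commands = VOICE_COMMANDS[new_screen]

        def handle_voice(self, command: str) -> str:
            if command in self.active_commands:
                return f"execute '{command}' on {self.active_screen}"
            return f"'{command}' is not active on {self.active_screen}"

    if __name__ == "__main__":
        vui = VoiceUIState("navigation_screen")
        print(vui.handle_voice("zoom in"))
        vui.on_screen_change("radio_screen")     # event: user opens the radio app
        print(vui.handle_voice("zoom in"))       # deactivated with the old screen
        print(vui.handle_voice("next station"))  # active on the new screen
    ```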