Audio User Interface Patents (Class 715/727)
-
Publication number: 20090177386
Abstract: A method of operating a portable navigation device or navigation system is described, together with a computer program, a PND and a navigation system. The method includes the steps of providing that a specific premises, location or location range (identifiable with reference to map data locally stored in the device or system, and optionally being or including the current location) can be at least temporarily stored in memory. The method also includes presenting to the user at least one user-selectable option by means of which qualitative information pertaining to the premises, location or range can be entered locally in the device or system, the selection of the option resulting in the immediate or subsequent recordal and storage of both the qualitative information and an association thereof with the identified premises, location or range.
Type: Application. Filed: January 7, 2008. Publication date: July 9, 2009. Inventor: Eric Haase
-
Patent number: 7558735
Abstract: A method and system for capturing and transcribing dictated information and for delivering the transcribed information to an end user. A voice file containing a digital recording of the dictated information is received via the Internet from a remote device. The voice file is forwarded via the Internet to a remote transcription service provider. The transcribed information is received from the transcription service provider and in turn delivered to, for example, a facsimile machine, a text server and/or printer, a computer system or Web browser. The need for expensive dictation equipment is reduced or eliminated. In addition, the system can be readily scaled up to accommodate more users without a commensurate increase in costs.
Type: Grant. Filed: December 28, 2000. Date of Patent: July 7, 2009. Assignee: Vianeta Communication. Inventor: Sridhar Obilisetty
-
Publication number: 20090162023
Abstract: A method for setting menu options is provided for a digital photo frame that includes a memory configured for storing a plurality of multimedia files. The method includes outputting a menu of at least one of a plurality of menu options, receiving inputs, determining current available system resources and the menu options available for a multimedia file in response to a selected operation, and generating a different menu from the available options to avoid over-occupying system resources. A digital photo frame with a menu option setting function is also provided.
Type: Application. Filed: September 3, 2008. Publication date: June 25, 2009. Applicants: Hong Fu Jin Precision Industry (ShenZhen) Co., Ltd., Hon Hai Precision Industry Co., Ltd. Inventors: Xiao-Guang Li, Kuan-Hong Hsieh
-
Publication number: 20090164905
Abstract: A mobile terminal including an output unit configured to output sound, an equalizer configured to adjust parameters of the sound output by the output unit, a display unit including a touch screen and configured to display a Graphic User Interface (GUI) including a graphical guide that can be touched and moved to adjust the parameters of the sound output by the output unit, and a controller configured to control the equalizer to adjust the parameters of the sound output by the output unit in accordance with a shape of the graphical guide that is touched and moved.
Type: Application. Filed: December 19, 2008. Publication date: June 25, 2009. Applicant: LG Electronics Inc. Inventor: Dong-Seuck Ko
-
Patent number: 7552389
Abstract: A computer program of the type commonly known as a “wizard” is disclosed that initializes user interface software for controlling an audio conferencing device. The wizard allows the desired audio inputs (e.g., microphone, telephones, etc.) and audio outputs (speakers, recording devices, etc.) to be chosen by an audio system administrator. Thereafter, the wizard allows an audio conferencing device (or devices) to be chosen by the administrator, or allows such a device(s) to be optimally chosen dependent upon the chosen inputs and outputs. The wizard then maps the inputs and outputs to the input and output ports on the audio conferencing device, a step which again can be performed manually by the administrator or automatically by the wizard. After reviewing the mapping results, the administrator finishes the wizard, which computes the mapping parameters and other audio-optimizing parameters for the selected inputs and outputs.
Type: Grant. Filed: August 20, 2003. Date of Patent: June 23, 2009. Assignee: Polycom, Inc. Inventors: Thomas M. Drewes, James S. Joiner, Michael A. Pocino, Craig H. Richardson
-
Publication number: 20090158158
Abstract: A system for scheduling and transmitting messages is disclosed. The system stores a plurality of audio files in an audio database, generates a schedule of queued messages via the plurality of audio files, transmits the queued messages based on the schedule, reconfigures the schedule based on a user interaction, and delivers the queued messages in accordance with the reconfigured schedule. A scheduled plurality of messages can be transmitted in a clear and professional manner. Additionally, “ad hoc” messages can be incorporated into the schedule without significantly disrupting the other messages.
Type: Application. Filed: December 5, 2008. Publication date: June 18, 2009. Applicant: International Business Machines Corporation. Inventor: Robert S. Hoblit
-
Patent number: 7549123
Abstract: Techniques for mixing multiple input channel signals into multiple output channel signals are provided. A graphical user interface (GUI), which includes multiple indicators, is displayed. The input channel signals are mixed to produce multiple output channel signals. The mixing is performed based on the distance between the indicators' positions in the GUI. According to one embodiment of the invention, the mixing is also performed based on the angle formed between the indicators. Thus, the extent to which an input channel signal is carried by an output channel signal is, in one embodiment of the invention, a function of both the distance between the indicators and an angle formed by the indicators in the GUI.
Type: Grant. Filed: June 15, 2005. Date of Patent: June 16, 2009. Assignee: Apple Inc. Inventors: William George Stewart, Michael Stephen Hopkins
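The distance-weighted mixing this abstract describes can be sketched in a few lines of Python. This is an illustrative reading, not the patented implementation: the inverse-distance gain curve, the `rolloff` parameter, and the function names are all assumptions.

```python
import math

def mixing_gain(input_pos, output_pos, rolloff=1.0):
    """Gain applied to an input channel for one output channel,
    decreasing as the distance between their GUI indicators grows."""
    dx = output_pos[0] - input_pos[0]
    dy = output_pos[1] - input_pos[1]
    distance = math.hypot(dx, dy)
    return 1.0 / (1.0 + rolloff * distance)

def mix(inputs, output_positions):
    """inputs: list of (sample_value, indicator_position) pairs.
    Returns one mixed sample per output channel position."""
    mixed = []
    for out_pos in output_positions:
        total = sum(sample * mixing_gain(pos, out_pos)
                    for sample, pos in inputs)
        mixed.append(total)
    return mixed
```

An input whose indicator sits on top of an output's indicator is carried at full gain; the angle-based weighting mentioned in the abstract would add a second factor on top of this distance term.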
-
Publication number: 20090150785
Abstract: An input device for inputting a command to an electronic system such as an on-board navigation system includes a microphone for inputting voice, a voice recognizer for analyzing the inputted voice and comparing it with data stored in a voice recognition dictionary, a touch panel displaying keys corresponding to the inputted voice, and a controller for controlling operation of the input device. The user's voice input from the microphone is fed into the voice recognizer to calculate a degree of coincidence with the data in the voice recognition dictionary. The keys corresponding to the inputted voice having a high degree of coincidence are displayed on the touch panel in an enlarged size. The enlarging rates may be determined according to the degree of coincidence. The user is able to finalize, easily and quickly, the keys constituting a command by touching the panel because candidate keys are enlarged.
Type: Application. Filed: November 25, 2008. Publication date: June 11, 2009. Applicant: Denso Corporation. Inventors: Katsushi Asami, Ichiro Akahori
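As a rough sketch of the enlargement idea, a key's display scale could grow with its degree of coincidence. The `threshold` and `max_scale` values below are invented for illustration; the publication says only that enlarging rates may track the coincidence score.

```python
def key_scale(coincidence, threshold=0.6, max_scale=2.0):
    """Display scale for a candidate key: keys whose recognition
    score is below the threshold stay at normal size, while keys
    above it grow linearly up to max_scale at a perfect score."""
    if coincidence < threshold:
        return 1.0  # normal size
    span = 1.0 - threshold
    return 1.0 + (max_scale - 1.0) * (coincidence - threshold) / span
```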
-
Patent number: 7546531
Abstract: A rendering engine that enables access to alternate content determines, based on an accessibility mode, a list of focusable elements associated with a document to be rendered. If the accessibility mode is inactive, the list of focusable elements includes elements that are, by default, focusable. If the accessibility mode is active, the list of focusable elements also includes elements that have associated alternate content, but that are not, by default, focusable. When an accessibility mode is active and an element with associated alternate content is selected, the alternate content is rendered.
Type: Grant. Filed: November 21, 2003. Date of Patent: June 9, 2009. Assignee: Microsoft Corporation. Inventor: Tantek Celik
-
Publication number: 20090144626
Abstract: An online identity may selectively control perceptibility of incoming sounds associated with electronic messages between online identities (FIG. 4, 400). A first online identity is provided with two or more sound control options to selectively control rendering of one or more sounds associated with electronic messaging to the first online identity from a second online identity, and two or more sound control options to selectively control rendering of one or more sounds associated with electronic messaging to the first online identity from a third online identity (405). The selected sound control options associated with electronic messaging from at least one of the online identities are stored (410) and one or more sounds from at least one of the second online identity or the third online identity are received (415). The perceptibility of sound to the first online identity is selectively controlled in accordance with a selected sound control option from the first online identity (420).
Type: Application. Filed: October 11, 2006. Publication date: June 4, 2009. Inventors: Barry Appelman, Brian D. Heikes, W. Karl Renner
-
Patent number: 7543235
Abstract: Methods and systems for creating and rendering skins are described. In one described embodiment, a method of providing a skin model for use in rendering a skin comprises receiving a skin definition file that contains information associated with a skin, and one or more other files that are associated with the skin; providing at least some of the one or more other files directly into computer memory, without the files entering a computer file system; and processing the skin definition file to provide a hierarchical data structure that describes the skin.
Type: Grant. Filed: May 13, 2005. Date of Patent: June 2, 2009. Assignee: Microsoft Corporation. Inventors: Michael J. Novak, David M. Nadalin, Kipley J. Olson, Kevin P. Larkin, Frank G. Sanborn
-
Patent number: 7539618
Abstract: A system for operating an electronic device that enables the same agent software to be used in common among a plurality of devices. When the agent software and a voice recognition engine are transferred from a portable data terminal, a car navigation system or audio system runs the transferred agent software so as to display a simulated human animated character which converses with a user, recognizes speech obtained from that conversation with the voice recognition engine, prepares a script reflecting the content of the conversation, and executes the prepared script to perform predetermined processing.
Type: Grant. Filed: November 22, 2005. Date of Patent: May 26, 2009. Assignee: Denso Corporation. Inventor: Ichiro Yoshida
-
Publication number: 20090117945
Abstract: A hands-free device (1) that is linked via a short range radio frequency (RF) communications link (2) to a mobile communications device (3) supports a number of functions driven by user actions and by internal events such as incoming calls. The user operable functions of the hands-free device (1) are arranged in a hierarchical menu structure that a user can navigate through and select from using a user operable input means of the hands-free device (1). Each function or option that can be selected in the menu structure has associated with it a spoken prompt that is automatically provided to the user via the hands-free device (1) when that function is reached in the menu structure.
Type: Application. Filed: July 21, 2006. Publication date: May 7, 2009. Applicant: SouthWing S.L. Inventors: Sergio Mahler, Sergi Torrents, Marc Molina, Jean-Regis Ferraton
-
Patent number: 7529674
Abstract: Methods and systems, including computer program products, for speech animation. The system includes a speech animation engine and a client application in communication with the speech animation engine. The client application sends a request for speech animation to the speech animation engine. The request identifies data to be used to generate the speech animation, where speech animation is speech synchronized with facial expressions. The client application receives a response from the speech animation engine. The response identifies the generated speech animation. The client application uses the generated speech animation to animate a talking agent displayed on a user interface of the client application.
Type: Grant. Filed: March 8, 2004. Date of Patent: May 5, 2009. Assignee: SAP Aktiengesellschaft. Inventors: Li Gong, Townsend Duong, Andrew Yinger
-
Publication number: 20090113305
Abstract: The present invention describes a system and method for planning and authoring an audio and/or video guided tour of an exhibition space, such as a museum or gallery. A mapmaking tool is provided whereby the user can graphically map the exhibition space and the location of the exhibits within that space. An authoring tool is also provided whereby the user can, with the map of the exhibition space produced by the mapmaking tool, record audio tours for each exhibit. If the location of an exhibit is changed, the associated audio and/or video tour remains associated with that exhibit.
Type: Application. Filed: March 19, 2008. Publication date: April 30, 2009. Inventors: Elizabeth Sherman Graif, Robert Klerer
-
Patent number: 7526431
Abstract: Alphabetic filtering of the speech recognition of words uses a key press to indicate a desired character in an alphabetic filter string, where each key press represents two or more letters. The key presses can be disambiguated by recognizing a key-disambiguation utterance in association with a given key press. A user can select a desired recognition candidate from a choice list produced by such filtered word recognition. Ambiguous alphabetic filtering can be performed iteratively in response to the addition of successive ambiguous key presses. A user can select to re-recognize the utterance using filtering based on ambiguous key input after seeing the results of recognition without such filtering. Unambiguous alphabetic filtering can be performed by using multiple presses of an ambiguous key to disambiguate which letter is intended. A user can select between entering text by either large vocabulary speech recognition or by spelling text by pressing phone keys.
Type: Grant. Filed: September 24, 2004. Date of Patent: April 28, 2009. Assignee: Voice Signal Technologies, Inc. Inventors: Daniel L. Roth, Jordan R. Cohen, David F. Johnston
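The ambiguous filtering step resembles T9-style disambiguation and can be sketched as follows. The keypad mapping is the standard phone layout; `filter_candidates` and the candidate-list interface are hypothetical names, not taken from the patent.

```python
# Standard phone keypad letter assignments (ITU-T E.161 layout).
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

def matches_keys(word, key_presses):
    """True if the word's prefix is consistent with the ambiguous keys:
    the i-th key pressed must cover the word's i-th letter."""
    if len(key_presses) > len(word):
        return False
    return all(word[i] in KEYPAD[k] for i, k in enumerate(key_presses))

def filter_candidates(candidates, key_presses):
    """Narrow a speech recognizer's choice list using ambiguous key input."""
    return [w for w in candidates if matches_keys(w, key_presses)]
```

Pressing "2" keeps every candidate starting with a, b, or c; each successive key press filters the list further, which is the iterative behavior the abstract describes.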
-
Patent number: 7526378
Abstract: The present invention includes a mobile device that is capable of storing media and optionally a wireless network connection for streaming live data between the device and the database. The data from the orientation sensor and the position sensor both go directly into the mobile device as input to a controller in the device. The controller controls in part an audio/video output that is modulated based upon the relative position of the user to an object of interest as well as the user's orientation.
Type: Grant. Filed: November 22, 2005. Date of Patent: April 28, 2009. Inventor: Ryan T. Genz
-
Patent number: 7509593
Abstract: A distance between a cursor and an object displayed on a Web page or other image automatically controls a volume with which an audio file associated with the object is played. A user can thus explore a displayed image to discover audio files associated with different objects or portions of the displayed image. The audio files can provide instructions, data, music, sound effects, or almost any other form of audible sound desired. The designer and/or the user of the displayed image can set parameters that control how the audio files are played, such as the maximum distance of a cursor from an object to initiate play, the relative priority of the objects, and the maximum number of audio files that are simultaneously played.
Type: Grant. Filed: May 12, 2005. Date of Patent: March 24, 2009. Assignee: Microsoft Corporation. Inventor: Takehiro Kaminagayoshi
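A plausible sketch of the distance-controlled volume follows. The linear falloff, the `max_distance` default, and the simultaneous-play `limit` are illustrative choices; the patent leaves these as parameters set by the designer or user.

```python
def playback_volume(cursor, obj, max_distance=200.0):
    """Volume in [0, 1]: full when the cursor is on the object,
    falling linearly to silence at or beyond max_distance."""
    dx, dy = obj[0] - cursor[0], obj[1] - cursor[1]
    distance = (dx * dx + dy * dy) ** 0.5
    if distance >= max_distance:
        return 0.0
    return 1.0 - distance / max_distance

def audible_objects(cursor, objects, limit=3):
    """Pick up to `limit` objects to play, loudest (nearest) first,
    modeling the cap on simultaneously played audio files."""
    scored = [(playback_volume(cursor, o), o) for o in objects]
    scored = [(v, o) for v, o in scored if v > 0.0]
    scored.sort(key=lambda vo: vo[0], reverse=True)
    return scored[:limit]
```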
-
Publication number: 20090076726
Abstract: A system for acquiring structural defects and vegetation conditions on electric transmission lines and the right of way, using a touch screen laptop with memory capability associated with a GPS unit. The laptop operates off of stored geographic coordinates of structures and acquires such coordinates when desired. The touch screen has buttons corresponding to the conditions, for entering into memory a record of the condition ascribed to each button. A structure is captured for data entry when a preselected distance exists between the laptop and the structure; the system is further programmed to enter into memory any condition entries made through the touch screen, to release the structure when the distance is exceeded, and to record to memory all entries made during capture. Photos can be taken of a condition and a voice note can be generated for a condition, both being recorded in memory with the condition entries and all indexed to the captured structure.
Type: Application. Filed: July 23, 2008. Publication date: March 19, 2009. Inventors: Joseph A. Gemignani, Jr., Jeffrey D. Young
-
Patent number: 7505032
Abstract: Disclosed is a mouse device in which an audio signal supplied from a computer or the like is reproduced through the mouse body itself. The mouse device includes a body housing with a click button positioned on an upper side thereof for operating the mouse device, and a mouse module installed therein for processing a sensing signal according to the movement thereof and a click signal generated by clicking the click button, then inputting the signals to an external computer; and an exciter attached to one side of the body housing for applying a corresponding sound wave to the body housing when an audio signal is applied thereto, so that the body housing is substantially vibrated to reproduce sound. This mouse device can thus generate sound through the mouse body itself even when no external speaker is attached.
Type: Grant. Filed: October 17, 2005. Date of Patent: March 17, 2009. Assignee: Soundscape Co., Ltd. Inventor: Jong-Hyun Shin
-
Patent number: 7504572
Abstract: A sound is output by calculating a change in coordinate data as a vector and generating sound data corresponding to the calculated vector, so that sounds can be freely obtained without being limited by the size of or positions on an input coordinate plane. A sound generating apparatus 10 includes a coordinate input device 12 for inputting coordinate data, a main control device 14, an acoustic device 16, and a display device 18. The main control device 14 includes: a motion calculation unit 20 that calculates a vector between two successive sets of the coordinate data input with a predetermined time interval; a sound data generating unit 22 that generates the sound data based on the calculated vector; a musical instrument data generating unit and displayed-color data generating unit 24 that serves both functions of generating musical instrument data and generating displayed-color data based on the coordinate data; a data transfer and saving unit 26; and a MIDI sound source 28 controlled by the sound data.
Type: Grant. Filed: July 7, 2005. Date of Patent: March 17, 2009. Assignee: National University Corporation Kyushu Institute of Technology. Inventor: Shunsuke Nakamura
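The vector-to-sound mapping could look like the sketch below. Mapping the vector's angle to pitch and its magnitude to MIDI velocity is one plausible choice, not necessarily the mapping used in the patent; the 24-semitone range and base note are assumptions.

```python
import math

def motion_vector(p0, p1):
    """Vector between two successive coordinate samples."""
    return (p1[0] - p0[0], p1[1] - p0[1])

def vector_to_note(vec, base_note=60):
    """Map a motion vector to MIDI-style sound data: the vector's
    angle picks a pitch within a two-octave range, and its
    magnitude picks the note velocity (clamped to MIDI's 0-127)."""
    magnitude = math.hypot(*vec)
    angle = math.atan2(vec[1], vec[0])  # -pi .. pi
    semitone = int(round((angle + math.pi) / (2 * math.pi) * 24))
    velocity = min(127, int(magnitude))
    return {"note": base_note + semitone, "velocity": velocity}
```

Because only the *change* between samples matters, the same gesture produces the same sound anywhere on the input plane, which is the independence from plane size and position that the abstract claims.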
-
Patent number: 7502742
Abstract: A method and system for error prevention and recovery of voice activated navigation through a menu having plural nodes provides situation dependent utterance verification by relating confirmation to utterance determination confidence levels. In one embodiment, a high confidence level results in implicit confirmation, a medium confidence level results in explicit confirmation and a low confidence level results in a concise interrogative prompt of a single word that requests the user to repeat the utterance. In situations where voice recognition is difficult, dual modality with DTMF navigation is provided as an option for menu selections.
Type: Grant. Filed: May 22, 2006. Date of Patent: March 10, 2009. Assignee: AT&T Labs, Inc. Inventors: Benjamin Anthony Knott, John Mills Martin, Robert Randal Bushey, Tracy Leigh Smart
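The three-tier confirmation strategy can be sketched directly. The numeric thresholds below are invented for illustration, since the patent specifies only high, medium, and low confidence levels.

```python
def confirmation_strategy(confidence, high=0.85, low=0.5):
    """Pick a confirmation style from the recognizer's confidence
    score (assumed to lie in [0, 1]). Thresholds are illustrative."""
    if confidence >= high:
        return "implicit"   # proceed, restating the choice in passing
    if confidence >= low:
        return "explicit"   # ask the user to confirm, e.g. "Did you say X?"
    return "repeat"         # concise single-word prompt, e.g. "Pardon?"
```

Tying the prompt style to confidence keeps high-confidence interactions fast while still catching likely misrecognitions, which is the error-prevention trade-off the abstract describes.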
-
Patent number: 7500192
Abstract: A process of selecting a recording on an audiovisual reproduction system consists of displaying a number of windows on a touch screen as an interface with a user. Items of information are stored in a bulk memory and are representative of an image of the album cover that is associated with each window and whose corresponding musical recording is stored in the bulk memory of the reproduction system. Each zone of a window is associated, via the touch-screen interface software, with at least one address for accessing the items of information in the database that is stored in the bulk memory belonging to the album cover whose image is displayed in the window that is touched by the user.
Type: Grant. Filed: June 26, 2001. Date of Patent: March 3, 2009. Inventor: Tony Mastronardi
-
Patent number: 7500193
Abstract: To facilitate the use of audio files for annotation purposes, an audio file format, which includes audio data for playback purposes, is augmented with a parallel data channel of line identifiers, or with a map associating time codes for the audio data with line numbers on the original document. The line number-time code information in the audio file is used to navigate within the audio file, and also to associate bookmark links and captured audio annotation files with line numbers of the original text document. An annotation device may provide an output document wherein links to audio and/or text annotation files are embedded at corresponding line numbers. Also, a navigation index may be generated, having links to annotation files and associated document line numbers, as well as bookmark links to selected document line numbers.
Type: Grant. Filed: August 18, 2005. Date of Patent: March 3, 2009. Assignee: Copernicus Investments, LLC. Inventors: Steven Spielberg, Samuel Gustman
-
Patent number: 7496511
Abstract: One embodiment of the present invention provides a system that facilitates recognizing voice input. During operation, the system receives a document that includes a specification of a datatype for which there exists a predefined grammar. The system also obtains a locale attribute for the datatype, wherein the locale attribute identifies a version of a language that is spoken in a locale. Next, the system uses the locale attribute to look up a locale-specific grammar for the datatype, and then communicates the locale-specific grammar to a speech recognition engine, thereby allowing the speech recognition engine to use the locale-specific grammar to recognize voice input for the datatype.
Type: Grant. Filed: May 29, 2003. Date of Patent: February 24, 2009. Assignee: Oracle International Corporation. Inventor: Ashish Vora
-
Patent number: 7493560
Abstract: A method and apparatus for definition links for online documentation is provided. According to one aspect of the invention, the definition links for online documentation provide a non-sighted or visually impaired user employing a screen reading utility a method to select a definition link from an online document which in turn navigates the user to a corresponding link definition area of the online document. Once the user has finished reading the definition, the user can easily be navigated back to the original text area of the document. In one embodiment, the definition links for online documentation provide footnote information. In one embodiment, a method of accessing footnote links and corresponding footnote definitions is provided. The embodiments contained herein for definition links for online documentation adhere to United States Federal Section 508 guidelines for accessibility standards.
Type: Grant. Filed: October 22, 2002. Date of Patent: February 17, 2009. Assignee: Oracle International Corporation. Inventors: Ken Kipnes, Donald Elliott Raikes, Edna C. Elle
-
Patent number: 7493559
Abstract: The system includes an image display system, a direct annotation creation module, an annotation display module, a vocabulary comparison module and a dynamic updating module. These modules are coupled together by a bus and provide for the direct multi-modal annotation of media objects. The direct annotation creation module creates annotation objects. The annotation display module works in cooperation with the image display system to display the annotations themselves or graphic representations of the annotations positioned relative to the images of the objects. The system automatically creates the annotation, associates it with the selected images, and displays either a graphic representation of the annotation or a text translation of the audio input.
Type: Grant. Filed: January 9, 2002. Date of Patent: February 17, 2009. Assignee: Ricoh Co., Ltd. Inventors: Gregory J. Wolff, Peter E. Hart
-
Patent number: 7490313
Abstract: Control patterns are used to describe functionality that may be exposed by one or more types of elements or controls. Functionality that is common among two or more types of elements is described by the same control pattern. Certain predefined methods, structures, properties, and/or events may be associated with a particular control pattern. Elements that support the control pattern, when queried, return an interface that describes those methods, structures, properties, and/or events. Control patterns are mutually exclusive in the functionality they represent, so they may be combined in arbitrary ways to expose the complete set of functionality offered by a particular control.
Type: Grant. Filed: May 17, 2003. Date of Patent: February 10, 2009. Assignee: Microsoft Corporation. Inventors: Robert E. Sinclair, Patricia M. Wagoner, Heather S. Burns, Paul J. Reid, Brendan McKeon
-
Patent number: 7487453
Abstract: A method is provided that includes receiving a user input, the user input having been input in a user interface in one of multiple modalities. The method also includes accessing, in response to receiving the user input, a multi-modality content document including content information and presentation information, the presentation information supporting presentation of the content information in each of the multiple modalities. In addition, the method includes accessing, in response to receiving the user input, metadata for the user interface, the metadata indicating that the user interface provides a first modality and a second modality for interfacing with a user. First-modality instructions are generated based on the accessed multi-modality content document and the accessed metadata, the first-modality instructions providing instructions for presenting the content information on the user interface using the first modality.
Type: Grant. Filed: March 24, 2006. Date of Patent: February 3, 2009. Assignee: SAP AG. Inventors: Steffen Goebel, Kay Kadner, Christoph Pohl, Falk Hartmann
-
Patent number: 7487451
Abstract: Methods, systems, and products are disclosed for creating a voice response grammar in a voice response server, including identifying presentation documents for a presentation, each presentation document having a presentation grammar. Typical embodiments include storing each presentation grammar in a voice response grammar on a voice response server. In typical embodiments, identifying presentation documents for a presentation includes creating a data structure representing a presentation and listing at least one presentation document in that data structure. In typical embodiments, listing the at least one presentation document includes storing a location of the presentation document in the data structure, and storing each presentation grammar includes retrieving a presentation grammar of the presentation document in dependence upon the location of the presentation document.
Type: Grant. Filed: December 11, 2003. Date of Patent: February 3, 2009. Assignee: International Business Machines Corporation. Inventors: William Kress Bodin, Michael John Burkhart, Daniel G. Eisenhauer, Daniel Mark Schumacher, Thomas J. Watson
-
Patent number: 7485796
Abstract: An apparatus and a method for providing a music file search function. The apparatus includes an input unit that receives an input of an attribute of a music file to be played, an extract unit that extracts a characteristic segment from the music file according to the input attribute, a 3D-sound generating unit that generates 3D sound from the characteristic segment of the music file along a spatial axis corresponding to the attribute, and an output unit that outputs the 3D sound. The method includes the steps of inputting an attribute of a music file to be played, searching a characteristic segment of the music file according to the input attribute, generating 3D sound from the characteristic segment of the music file along a spatial axis corresponding to the attribute, and outputting the 3D sound.
Type: Grant. Filed: May 10, 2006. Date of Patent: February 3, 2009. Assignee: Samsung Electronics Co., Ltd. Inventors: Hyeon Myeong, Chang-kyu Choi, Yeun-bae Kim, Min-kyu Park, Yong-beom Lee
-
Patent number: 7483541
Abstract: In a digital mixer including, on a control panel, a display, cursor controls, an increase/decrease control, and a plurality of channel strips for controlling parameters of associated input channels, each channel strip having a selection switch, an assignment switch is provided for assigning any one parameter among the parameters of the input channel to the increase/decrease control. When operation of the increase/decrease control is detected and no selection switch has been operated, the value of the parameter displayed at the position of the cursor is changed in accordance with the operation of the increase/decrease control; when a selection switch has been operated, the value of the parameter assigned to the increase/decrease control, among the parameters of the input channel corresponding to the channel strip having that selection switch, is changed.
Type: Grant. Filed: March 19, 2004. Date of Patent: January 27, 2009. Assignee: Yamaha Corporation. Inventor: Hideki Hagiwara
-
Patent number: 7480865
Abstract: An auxiliary operation interface of a digital recording/reproducing apparatus includes a targeting item, a switching button set and an audio prompt generator. The targeting item is optionally triggered to have the digital recording/reproducing apparatus execute a selected function. The audio prompt generator is enabled to generate an audio prompt when the targeting item is triggered. The audio prompt generator is optionally enabled or disabled by an operation of the switching button set.
Type: Grant. Filed: October 20, 2005. Date of Patent: January 20, 2009. Assignee: Lite-On It Corp. Inventor: Chia-Hsiang Lin
-
Publication number: 20090019367
Abstract: A collaboration architecture supports virtual meetings, including web conferencing and collaboration. Presence information is aggregated from different types of communication services to provide a generic representation of presence. In one implementation, collaboration lifecycle management is provided to manage meetings over the lifecycle of a project. Audio options include voice over internet protocol (VoIP) and conventional PSTN phone networks, which are supported in one implementation by an audio conferencing server.
Type: Application. Filed: May 14, 2007. Publication date: January 15, 2009. Applicant: Convenos, LLC. Inventors: Rebecca Cavagnari, Marshall Moseley, Sebastian Torf, Thomas Torf
-
Publication number: 20090013254
Abstract: Various methods and systems are provided for auditory display of menu items. In one embodiment, a method includes detecting that a first item in an ordered listing of items is identified; and providing a first sound associated with the first item for auditory display, the first sound having a pitch corresponding to the location of the first item within the ordered listing of items.
Type: Application. Filed: June 13, 2008. Publication date: January 8, 2009. Applicant: Georgia Tech Research Corporation. Inventors: Bruce N. Walker, Pavani Yalla
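The pitch-by-position idea might be sketched as follows. The frequency range, the descending direction, and the log-scale interpolation are assumptions for illustration, not details from the publication.

```python
def item_pitch_hz(index, count, low_hz=220.0, high_hz=880.0):
    """Frequency cue for the item at `index` in a list of `count` items.
    Assumed mapping: pitch descends from high_hz at the top of the list
    to low_hz at the bottom, interpolated on a log scale so equal steps
    through the list sound like equal musical intervals."""
    if count < 2:
        return high_hz
    fraction = index / (count - 1)  # 0.0 for the first item, 1.0 for the last
    return high_hz * (low_hz / high_hz) ** fraction
```

With this mapping a user scrolling a five-item menu hears a steadily falling pitch, so the sound alone conveys roughly where in the list the focus sits.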
-
Patent number: 7461344Abstract: A control system for a modular, mixed initiative, human-machine interface. The control system comprises moves, the moves defining units of interaction about a topic of information. The moves comprise at least one system move and at least one user move. Each system move is structured such that it contains information regarding pre-processing to be performed, information to develop a prompt to be issued to the user and information that enables possible user moves which can follow the system move to be listed. Each user move is structured such that it contains information relating to interpretation grammars that trigger the user move, information relating to processing to be performed based upon received and recognized data and information regarding the next move to be invoked. A corresponding method is provided.Type: GrantFiled: May 1, 2002Date of Patent: December 2, 2008Assignee: Microsoft CorporationInventors: Steven Young, Stephen Potter, Renaud J. Lecoeuche
-
Patent number: 7461040Abstract: A strategic method for process control wherein the method includes, for a predetermined process juncture, the steps of: (A) defining an interconnection cell having associated therewith (i) at least one set of input data or at least one set of process control parameters, and (ii) at least one set of output data; (B) assigning at least one boundary value to at least one set of the sets associated with the defined interconnection cell; (C) using the assigned at least one boundary value, forming a plurality of discrete respective set combinations, and (D) for the interconnection cell, processing data from the plurality of respective formed set combinations into respective corresponding data record clusters. The strategic method for process control is a continuation-in-part of a Knowledge-Engineering Protocol-Suite, (U.S. patent application Ser. No. 09/588,681 filed on 7 Jun.Type: GrantFiled: August 7, 2000Date of Patent: December 2, 2008Assignee: Insyst Ltd.Inventors: Arnold J. Goldman, Joseph Fisher, Jehuda Hartman, Shlomo Sarel
-
Patent number: 7451399Abstract: Methods and systems for creating and rendering skins are described. In one described embodiment, a skin is defined using at least one skin definition that defines the skin in a hierarchical tag-based language.Type: GrantFiled: May 13, 2005Date of Patent: November 11, 2008Assignee: MicrosoftInventors: Michael J. Novak, David M. Nadalin, Kipley J. Olson, Kevin P. Larkin, Frank G. Sanborn
-
Publication number: 20080263451Abstract: The invention describes a method for driving multiple applications (A1, A2, A3, . . . , An) by a common dialog management system (1). Therein, a unique set of auditory icons (S1, S2, S3, . . . , Sn) is assigned to each application (A1, A2, A3, . . . , An). The common dialog management system (1) informs a user of the status of an application (A1, A2, A3, . . . , An) by playback, at a specific point in a dialog flow, of a relevant auditory icon (I1, I2, I3, . . . , In) selected from the unique set of auditory icons (S1, S2, S3, . . . , Sn) of the respective application (A1, A2, A3, . . .Type: ApplicationFiled: March 21, 2005Publication date: October 23, 2008Applicant: KONINKLIJKE PHILIPS ELECTRONIC, N.V.Inventors: Thomas Portele, Barbertje Streefkerk, Jurgen Te Vrugt
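A minimal sketch of the per-application auditory-icon registry the abstract describes; the class and method names are hypothetical:

```python
class DialogManager:
    """Common dialog manager that plays an application's auditory icon
    at specific points in the dialog flow. Each application registers
    its own unique set of icons, keyed by status."""

    def __init__(self):
        self.icon_sets = {}

    def register(self, app, icons):
        # icons: mapping from application status to an icon sound id
        self.icon_sets[app] = dict(icons)

    def status_icon(self, app, status):
        # Select the relevant icon from the application's unique set.
        return self.icon_sets[app][status]

dm = DialogManager()
dm.register("email", {"new_message": "ding", "send_ok": "whoosh"})
dm.register("navigation", {"reroute": "chime"})
dm.status_icon("email", "new_message")  # 'ding'
```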
-
Patent number: 7441192Abstract: Systems and methods for providing a user-definable multimedia or digital library are disclosed. Also disclosed are systems and methods for selecting and playing multimedia or digital files from within the library. The selection and playback systems and methods involve a limited number of user activated buttons, which are implemented both for mapping directly to storage locations of particular multimedia or digital files, and for accepting and playing a multimedia or digital file once selected. The limited number of buttons required for the various features of these systems and methods provide vehicle operators, such as automobile drivers, with a safe mechanism and procedure for retrieving and playing customized play lists and particular songs while driving.Type: GrantFiled: December 5, 2002Date of Patent: October 21, 2008Assignee: Toyota Motor Sales U.S.A., Inc.Inventor: James T. Pisz
-
Patent number: 7437296Abstract: A program guidance apparatus includes a recognition word storage unit (105) operable to store past recognition words recognized by speech recognition, a viewing history word storage unit (106) operable to store viewing history words, i.e., information about viewed programs, and a dictionary creating unit (103) operable to create a customized recognition dictionary by adding to the basic recognition dictionary those past recognition words and viewing history words not already included in it, and another customized recognition dictionary in which weights are assigned using an “item weight coefficient” according to the categories of words and a “history weight coefficient” according to whether or not a word is recorded as a past recognition word or viewing history word.Type: GrantFiled: March 10, 2004Date of Patent: October 14, 2008Assignee: Matsushita Electric Industrial Co., Ltd.Inventors: Tsuyoshi Inoue, Makoto Nishizaki, Tomohiro Konuma
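The two-coefficient weighting can be sketched as a product of per-category and per-history factors. The coefficient values, the default category, and the data layout below are all assumptions; the patent names the coefficients but not their values:

```python
# Hypothetical coefficient tables.
ITEM_WEIGHT = {"title": 1.5, "performer": 1.2, "genre": 1.0}
HISTORY_WEIGHT = {True: 2.0, False: 1.0}

def recognition_weight(word_category, seen_before):
    """Weight = item weight coefficient (by word category) x history
    weight coefficient (whether the word is recorded as a past
    recognition word or viewing history word)."""
    return ITEM_WEIGHT.get(word_category, 1.0) * HISTORY_WEIGHT[seen_before]

def build_custom_dictionary(basic, past_words, viewing_words):
    """basic: {word: category}. past_words / viewing_words: sets of words.
    Adds words missing from the basic dictionary, then attaches a weight
    to every entry, returning {word: (category, weight)}."""
    merged = dict(basic)
    for w in past_words | viewing_words:
        merged.setdefault(w, "title")  # assumed default category
    seen = past_words | viewing_words
    return {w: (cat, recognition_weight(cat, w in seen))
            for w, cat in merged.items()}

custom = build_custom_dictionary({"news": "genre"}, {"drama"}, set())
```

Here "drama" enters the dictionary from the recognition history and gets boosted (1.5 x 2.0 = 3.0), while "news" keeps the baseline weight.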
-
Patent number: 7434178Abstract: A communication device for communicating with an external device is connectable to a navigation apparatus. The navigation apparatus includes a display portion, an input portion for inputting memo information, an outgoing/incoming call determination portion, and a display control portion for controlling the display portion. The outgoing/incoming call determination portion determines whether the communication portion conducts an outgoing call and whether the communication portion receives an incoming call. When the outgoing/incoming call determination portion concludes that the communication portion conducts the outgoing call or that the communication portion receives the incoming call, the display control portion splits the display portion into a memo screen on which content of the memo information is displayed and a navigation screen.Type: GrantFiled: May 15, 2003Date of Patent: October 7, 2008Assignee: Fujitsu Ten LimitedInventor: Katsumi Sakata
-
Publication number: 20080235582Abstract: A method for and system for communicating using a virtual world are disclosed. In the method an avatar may be associated with a source of an email. An email may be generated within the virtual world and one or more images of an avatar may be associated with the email. The email may be sent to a real device and the one or more images may be presented at a destination of the email. The system may comprise one or more processors configured to generate a virtual world; associate an avatar with a source of an email; generate an email within the virtual world; associate one or more images of an avatar with the email; send the email to a real device; and present the one or more images at a destination of the email.Type: ApplicationFiled: March 5, 2007Publication date: September 25, 2008Applicant: Sony Computer Entertainment America Inc.Inventors: Gary Zalewski, Tomas Gillo, Mitchell Goodwin, Scott Waugaman, Attila Vass
-
Publication number: 20080229399Abstract: Multiple access internet portals are provided. A representative system, among others, includes a communication facility and a wireless internet server. The communication facility is operable to connect to a plurality of wireless devices through a mobile network. The wireless internet server is coupled to the communication facility, retrieves a personalized profile associated with a registered user on one of the plurality of wireless devices, and provides substantially similar personalized content to said at least one registered user on a variety of platforms associated with the wireless devices. Methods and other systems for multiple access portals are also provided.Type: ApplicationFiled: April 24, 2008Publication date: September 18, 2008Applicants: CorporationInventors: Douglas R. O'Neil, Jose F. Rivera
-
Publication number: 20080229206Abstract: Systems, apparatus, methods and computer program products are described below for using surround sound to audibly describe the user interface elements of a graphical user interface. The position of each audible description is based on the position of the user interface element in the graphical user interface. A method is provided that includes identifying one or more user interface elements that have a position within a display space. Each identified user interface element is described in surround sound, where the sound of each description is positioned based on the position of each respective user interface element relative to the display space.Type: ApplicationFiled: March 14, 2007Publication date: September 18, 2008Applicant: APPLE INC.Inventors: Eric Taylor Seymour, Richard W. Fabrick, Patti Pei-Chin Hoa, Anthony E. Morales
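One way to realize position-based audio placement is to map an element's horizontal screen coordinate to an azimuth angle and then to speaker gains. The 180-degree frontal field and the constant-power stereo pan below are assumptions standing in for a full surround renderer:

```python
import math

def element_azimuth(x, display_width, field_degrees=180.0):
    """Map an element's horizontal position to an azimuth angle:
    left edge -> -90 deg, centre -> 0 deg, right edge -> +90 deg
    (assumed frontal field; the publication does not fix one)."""
    return (x / display_width - 0.5) * field_degrees

def stereo_gains(azimuth, field_degrees=180.0):
    """Constant-power stereo pan as a two-speaker stand-in for
    surround rendering. Returns (left_gain, right_gain)."""
    pan = azimuth / (field_degrees / 2)      # -1 .. +1
    theta = (pan + 1) * math.pi / 4          # 0 .. pi/2
    return math.cos(theta), math.sin(theta)

# A button at the far left edge of a 1920-px display:
az = element_azimuth(0, 1920)      # -90.0
left, right = stereo_gains(az)     # (1.0, 0.0): fully left
```

An element at screen centre yields azimuth 0 and equal gains of about 0.707 on each channel, so total power stays constant as the description pans.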
-
Publication number: 20080220751Abstract: The present invention envisages a GSM mobile telephone in which a line of icons is displayed on a display. As a user navigates through the displayed line of icons, the positions of the icons alter so that the selectable icon moves to the head of the line. This approach makes it very clear (i) which icon is selectable at any time and (ii) where that icon sits in relation to other icons at the same functional level (e.g. only first level icons will be present in one line). First level icons typically relate to the following functions: phonebook; messages; call register; counters; call diversion; telephone settings; network details; voice mail and IrDA activation.Type: ApplicationFiled: April 24, 2008Publication date: September 11, 2008Applicant: VTech Telecommunications Ltd.Inventor: CHRISTOPHER DE BAST
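The reordering behaviour described - the selectable icon moves to the head of the line while only icons of the same functional level remain present - amounts to rotating the list. A minimal sketch, with the first-level function names taken from the abstract:

```python
def navigate(icons, selected_index):
    """Rotate the first-level icon line so the selectable icon sits at
    the head, preserving the relative (cyclic) order of the rest."""
    return icons[selected_index:] + icons[:selected_index]

line = ["phonebook", "messages", "call register", "counters"]
navigate(line, 2)
# ['call register', 'counters', 'phonebook', 'messages']
```

Because the head position is fixed, the user always knows which icon is selectable, and the trailing order shows where it sits among its peers.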
-
Publication number: 20080215331Abstract: A method for speech enabling an application can include the step of specifying a speech input within a speech-enabled markup. The speech-enabled markup can also specify an application operation that is to be executed responsive to the detection of the speech input. After the speech input has been defined within the speech-enabled markup, the application can be instantiated. The specified speech input can then be detected and the application operation can be responsively executed in accordance with the specified speech-enabled markup.Type: ApplicationFiled: February 11, 2008Publication date: September 4, 2008Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATIONInventors: Charles Cross, Leslie Wilson, Steven Woodward
-
Publication number: 20080201639Abstract: The invention is an apparatus and method for prompting a sequence of timed activities such as a program of exercise. A computer is programmed to allow a user to select an activity or a plurality of activities and the duration of each selected activity. The computer is programmed to select a first prompt and a second prompt. A sequence of activities may be selected by the user and incorporated into a composite audio file, each with its own first and second prompts. The composite audio file may include music selected by the user. The first and second prompts may instruct the user to begin and end the activity while the music provides pacing cues to allow the user to pace him or herself during the period of the activity between the first and second prompts.Type: ApplicationFiled: February 15, 2007Publication date: August 21, 2008Inventor: Jay Shoman
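The first-prompt/second-prompt timeline can be sketched as a simple schedule builder; the prompt wording and tuple layout are illustrative assumptions:

```python
def build_prompt_schedule(activities):
    """activities: list of (name, duration_seconds) pairs selected by the
    user. Returns a timeline of (time_offset, prompt) pairs with a first
    (begin) and second (end) prompt per activity; music between the two
    prompts would provide pacing cues."""
    t, schedule = 0, []
    for name, duration in activities:
        schedule.append((t, f"begin {name}"))   # first prompt
        t += duration
        schedule.append((t, f"end {name}"))     # second prompt
    return schedule

build_prompt_schedule([("stretch", 60), ("run", 300)])
# [(0, 'begin stretch'), (60, 'end stretch'),
#  (60, 'begin run'), (360, 'end run')]
```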
-
Patent number: 7414634Abstract: In a mixer system including a mixer engine, wherein the contents of signal processing can be programmed, and a PC that edits the configuration of signal processing, the PC is provided with a display controller that graphically displays the edited signal processing configuration using components for the signal processing and wires connecting nodes of the components; an accepting device that accepts, in a screen thereof, designation of a node or wire whose signal is to be monitored; and a directing device that directs the mixer engine to output the signal from the designated node or wire to a monitoring analog signal output in accordance with the designation. The mixer engine is provided with an outputting device that outputs a signal to the monitoring analog signal output in accordance with the direction, separately from the signal processing relating to the edited configuration.Type: GrantFiled: February 14, 2005Date of Patent: August 19, 2008Assignee: Yamaha CorporationInventors: Satoshi Takemura, Akihiro Miwa, Makoto Hiroi
-
Publication number: 20080178633Abstract: The present invention relates to an avatar image processing unit applicable to various kinds of electronic products for representing information on the operation or control of the product with an avatar, and to a washing machine having the same applied thereto. The avatar image processing unit of the present invention, applied to a home appliance, can represent basic directions for use and the states of operation progress and control with moving characters. Voice presentation related to the movement of the avatar enables more accurate transmission of information, which enhances the visibility of information, improves information transmission performance, and puts emphasis on the entertainment factor of the present consumer trend. A variety of representations is possible by downloading and storing a new avatar from a server with an external device, which permits a variety of operation state representations suited to operating conditions, thereby enhancing the competitiveness of the product.Type: ApplicationFiled: June 28, 2006Publication date: July 31, 2008Applicant: LG Electronics Inc.Inventors: Seong Hae Jeong, Byung Hwan Ahn