Patents by Inventor Jean-Claude Junqua

Jean-Claude Junqua has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20110043475
    Abstract: A system and method for identifying a user of a handheld device is herein disclosed. The device implementing the method and system may attempt to identify a user based on signals that are incidental to the user's handling of the device. The signals are generated by a variety of sensors dispersed along the periphery of, or within, the housing. The sensors may include touch sensors, inertial sensors, acoustic sensors, pulse oximeters, and a touchpad. Based on the sensors and corresponding signals, identification information is generated. The identification information is used to identify the user of the handheld device. The handheld device may implement various statistical learning and data mining techniques to increase the robustness of the system. The device may also authenticate the user based on the user drawing a circle or other shape.
    Type: Application
    Filed: April 21, 2009
    Publication date: February 24, 2011
    Applicant: PANASONIC CORPORATION
    Inventors: Luca Rigazio, David Kryze, Jean-Claude Junqua
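A minimal sketch of how incidental sensor signals could feed a statistical identifier, as described in the abstract above. The feature set, the nearest-centroid classifier, and all names (features, NearestCentroidIdentifier) are illustrative assumptions, not the patented method.

```python
# Hypothetical sketch: identify a handheld-device user from incidental sensor
# signals with a nearest-centroid classifier over simple summary features.
import numpy as np

def features(accel, grip_pressure):
    """Summarize raw accelerometer and grip-pressure streams."""
    return np.array([accel.mean(), accel.std(),
                     grip_pressure.mean(), grip_pressure.std()])

class NearestCentroidIdentifier:
    def __init__(self):
        self.centroids = {}                        # user -> mean feature vector

    def enroll(self, user, samples):
        self.centroids[user] = np.mean([features(a, g) for a, g in samples], axis=0)

    def identify(self, accel, grip_pressure):
        f = features(accel, grip_pressure)
        return min(self.centroids, key=lambda u: np.linalg.norm(f - self.centroids[u]))

rng = np.random.default_rng(0)
ident = NearestCentroidIdentifier()
ident.enroll("alice", [(rng.normal(0.0, 1.0, 100), rng.normal(5.0, 0.5, 100)) for _ in range(5)])
ident.enroll("bob",   [(rng.normal(0.5, 2.0, 100), rng.normal(2.0, 0.5, 100)) for _ in range(5)])
print(ident.identify(rng.normal(0.5, 2.0, 100), rng.normal(2.0, 0.5, 100)))   # likely "bob"
```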
  • Patent number: 7844045
    Abstract: A call routing and supervising system includes an input receiving customer speech from a remote location, and a voice characteristics extractor extracting voice characteristics from the customer speech, such as language/dialect/accent, age group, gender, and eigendimension coordinates. A customer service representative selector selects one or more customer service representatives based on profiles of the customer service representatives respective of customers having voice characteristics similar to the extracted voice characteristics. In other aspects, a call monitor automatically analyzes dialogue between the customer and the customer service representative, such as detected interruptions, tracked dialogue turns, and recognized key phrases indicating frustration, politeness, and/or resolution characteristics of the dialogue. The call monitor records performance of the customer service representative respective of customers having the voice characteristics.
    Type: Grant
    Filed: June 16, 2004
    Date of Patent: November 30, 2010
    Assignee: Panasonic Corporation
    Inventors: Matteo Contolini, Jean-Claude Junqua
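A toy sketch of the routing idea in the abstract above: the representative whose historical performance profile best covers the caller's extracted voice characteristics is selected. The profile structure, segment keys, and scores are assumptions made for illustration.

```python
# Hypothetical profiles: each representative has a performance score per caller
# segment; the router picks the best-scoring representative for this caller.
from dataclasses import dataclass, field

@dataclass
class Representative:
    name: str
    scores: dict = field(default_factory=dict)     # (language, age_group, gender) -> 0..1

def route(caller_traits, reps):
    key = (caller_traits["language"], caller_traits["age_group"], caller_traits["gender"])
    return max(reps, key=lambda r: r.scores.get(key, 0.0))

reps = [
    Representative("Ana", {("es", "senior", "F"): 0.9, ("en", "adult", "M"): 0.4}),
    Representative("Ben", {("en", "adult", "M"): 0.8, ("es", "senior", "F"): 0.3}),
]
caller = {"language": "es", "age_group": "senior", "gender": "F"}   # from the voice extractor
print(route(caller, reps).name)                                     # Ana
```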
  • Patent number: 7729908
    Abstract: A noise robustness method operates jointly in a signal domain and a model domain. For example, energy is added in the signal domain for frequency bands where the actual noise level of an incoming signal is lower than the noise level used to train the models, thus obtaining a compensated signal. Also, energy is added in the model domain for frequency bands where the noise level of the incoming signal or the compensated signal is higher than the noise level used to train the models. Moreover, energy is never removed, thereby avoiding the higher sensitivity of energy removal to estimation errors.
    Type: Grant
    Filed: March 6, 2006
    Date of Patent: June 1, 2010
    Assignee: Panasonic Corporation
    Inventors: Luca Rigazio, David Kryze, Keiko Morii, Nobuyuki Kunieda, Jean-Claude Junqua
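A toy, band-wise sketch of the "add energy only" scheme summarized in the abstract above, assuming band values are linear power estimates; the function and variable names are illustrative.

```python
# Band-wise "add energy only" compensation: raise the signal toward the training
# noise floor, or raise the model where the incoming noise exceeds it.
import numpy as np

def compensate(signal_bands, signal_noise, model_bands, train_noise):
    signal_out, model_out = signal_bands.copy(), model_bands.copy()
    for b in range(len(train_noise)):
        if signal_noise[b] < train_noise[b]:
            # signal domain: add energy up to the noise level the models saw in training
            signal_out[b] += train_noise[b] - signal_noise[b]
        elif signal_noise[b] > train_noise[b]:
            # model domain: add the excess noise energy to the model's band
            model_out[b] += signal_noise[b] - train_noise[b]
        # energy is never removed in either domain
    return signal_out, model_out

sig, mdl = compensate(signal_bands=np.array([4.0, 6.0, 2.0]),
                      signal_noise=np.array([0.5, 3.0, 1.0]),
                      model_bands=np.array([3.5, 5.0, 2.5]),
                      train_noise=np.array([1.0, 1.0, 1.0]))
print(sig, mdl)
```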
  • Patent number: 7729909
    Abstract: Model compression is combined with model compensation. Model compression is needed in embedded ASR to reduce the size and the computational complexity of the models. Model compensation is used to adapt in real time to changing noise environments. The present invention allows for the design of smaller ASR engines (memory consumption reduced to as little as one-sixth) with reduced impact on recognition accuracy and/or robustness to noises.
    Type: Grant
    Filed: March 6, 2006
    Date of Patent: June 1, 2010
    Assignee: Panasonic Corporation
    Inventors: Luca Rigazio, David Kryze, Keiko Morii, Nobuyuki Kunieda, Jean-Claude Junqua
  • Patent number: 7596499
    Abstract: A multilingual text-to-speech system includes a source datastore of primary source parameters providing information about a speaker of a primary language. A plurality of primary filter parameters provides information about sounds in the primary language. A plurality of secondary filter parameters provides information about sounds in a secondary language. One or more of the secondary filter parameters are normalized to the primary filter parameters and mapped to a primary source parameter.
    Type: Grant
    Filed: February 2, 2004
    Date of Patent: September 29, 2009
    Assignee: Panasonic Corporation
    Inventors: Xavier Anguera Miro, Peter Veprek, Jean-Claude Junqua
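A toy sketch of the cross-language mapping in the abstract above: secondary-language filter parameters are mean/variance normalized to the primary language's statistics, and each is then mapped to the source parameter of the nearest primary filter. The normalization and nearest-neighbor mapping are stand-in assumptions, not the patented procedure.

```python
# Hypothetical mapping: mean/variance-normalize language-B filters to language-A
# statistics, then reuse the source parameter of the closest language-A filter.
import numpy as np

def normalize(secondary_filters, primary_filters):
    p_mean, p_std = primary_filters.mean(0), primary_filters.std(0) + 1e-9
    s_mean, s_std = secondary_filters.mean(0), secondary_filters.std(0) + 1e-9
    return (secondary_filters - s_mean) / s_std * p_std + p_mean

def map_to_source(secondary_filters, primary_filters, primary_sources):
    normed = normalize(secondary_filters, primary_filters)
    return [primary_sources[int(np.argmin(np.linalg.norm(primary_filters - f, axis=1)))]
            for f in normed]

primary_filters = np.array([[1.0, 0.2], [0.4, 0.9], [0.7, 0.7]])   # sounds of the primary language
primary_sources = ["src_a", "src_b", "src_c"]                      # primary speaker's source params
secondary_filters = np.array([[2.0, 0.5], [1.1, 1.8]])             # sounds of the secondary language
print(map_to_source(secondary_filters, primary_filters, primary_sources))
```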
  • Patent number: 7324943
    Abstract: A media capture device has an audio input receptive of user speech relating to a media capture activity in close temporal relation to the media capture activity. A plurality of focused speech recognition lexica respectively relating to media capture activities are stored on the device, and a speech recognizer recognizes the user speech based on a selected one of the focused speech recognition lexica. A media tagger tags captured media with generated speech recognition text, and a media annotator annotates the captured media with a sample of the user speech that is suitable for input to a speech recognizer. Tagging and annotating are based on close temporal relation between receipt of the user speech and capture of the captured media. Annotations may be converted to tags during post processing, employed to edit a lexicon using letter-to-sound rules and spelled word input, or matched directly to speech to retrieve captured media.
    Type: Grant
    Filed: October 2, 2003
    Date of Patent: January 29, 2008
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Luca Rigazio, Robert Boman, Patrick Nguyen, Jean-Claude Junqua
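A sketch of only the temporal-association bookkeeping described in the abstract above: speech received within a short window of a capture event tags and annotates that media item. The recognizer is stubbed out, and the 5-second window and all names are assumptions.

```python
# Hypothetical bookkeeping: speech close in time to a capture event tags and
# annotates that media item; the focused-lexicon recognizer is a stub.
from dataclasses import dataclass, field

WINDOW_S = 5.0                                       # assumed "close temporal relation" window

@dataclass
class MediaItem:
    path: str
    captured_at: float
    tags: list = field(default_factory=list)         # speech-recognition text
    annotations: list = field(default_factory=list)  # raw speech samples kept for later use

def recognize(audio, lexicon):
    return "stub_text"                               # stand-in for the focused-lexicon recognizer

def attach_speech(items, audio, spoken_at, activity_lexicon):
    for item in items:
        if abs(item.captured_at - spoken_at) <= WINDOW_S:
            item.tags.append(recognize(audio, activity_lexicon))
            item.annotations.append(audio)

photos = [MediaItem("img_001.jpg", captured_at=100.0)]
attach_speech(photos, audio=b"...pcm...", spoken_at=101.5, activity_lexicon="vacation")
print(photos[0].tags, len(photos[0].annotations))
```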
  • Patent number: 7249025
    Abstract: A portable device increases user access to equipment utilizing a communications interface providing communication with the equipment in accordance with various, combinable embodiments. In one embodiment, a speech generator generates speech based on commands relating to equipment operation, which may be received from the equipment via the communications interface. A selection mechanism allows the user to select commands and thereby operate the equipment. In another embodiment, a command navigator navigates commands based on user input by shifting focus between commands, communicates a command having the focus to the speech generator, and allows the user to select a command. In a further embodiment, a phoneticizer converts the commands and/or predetermined navigation and selection options into a dynamic speech lexicon, and a speech recognizer uses the lexicon to recognize a user navigation input and/or user selection of a command.
    Type: Grant
    Filed: May 9, 2003
    Date of Patent: July 24, 2007
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Jean-Claude Junqua, Eugene J. Seagriff
  • Patent number: 7240007
    Abstract: A speaker authentication system includes a data fuser operable to fuse voiceprint match attempt results with additional information to assist in authenticating a speaker providing audio input. In other aspects, the system includes a data store of speaker voiceprints and a voiceprint matching module adapted to receive an audio input and operable to attempt to assist in authenticating a speaker by matching the audio input to at least one of the speaker voiceprints. The voiceprint matching module adjusts a confidence of voiceprint match attempt results by at least one of: (a) a number of utterance repetitions upon which a matching speaker voiceprint has been trained; or (b) a passage of time since a training occurrence associated with a matching speaker voiceprint.
    Type: Grant
    Filed: March 20, 2003
    Date of Patent: July 3, 2007
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Jean-Claude Junqua, Matteo Contolini
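A sketch of the confidence-adjustment idea in the abstract above: the raw voiceprint match score is scaled up with the number of training repetitions and decays with the age of the training session. The particular curves and constants are illustrative assumptions, not the patented formula.

```python
# Hypothetical adjustment: scale the raw match score by how well trained the
# voiceprint is (repetitions) and how recently it was trained (age decay).
import time

def adjusted_confidence(raw_score, num_repetitions, trained_at,
                        half_life_days=180.0, saturation_reps=5):
    rep_factor = min(num_repetitions, saturation_reps) / saturation_reps
    age_days = (time.time() - trained_at) / 86400.0
    age_factor = 0.5 ** (age_days / half_life_days)      # exponential decay with age
    return raw_score * (0.5 + 0.5 * rep_factor) * age_factor

# Voiceprint trained 30 days ago on 3 repetitions
print(adjusted_confidence(0.92, num_repetitions=3, trained_at=time.time() - 30 * 86400))
```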
  • Patent number: 7124085
    Abstract: A constraint-based speech recognition system for use with a form-filling application employed over a telephone system is disclosed. The system comprises an input signal, wherein the input signal includes both speech input and non-speech input of a type generated by a user via a manually operated device. The system further comprises a constraint module operable to access an information database containing information suitable for use with speech recognition, and to generate candidate information based on the non-speech input and the information database, wherein the candidate information corresponds to a portion of the information. The system further comprises a speech recognition module operable to recognize speech based on the speech input and the candidate information. In an exemplary embodiment, the manually operated device is a touch-tone telephone keypad, and the information database is a lexicon encoded according to classes defined by the keys of the keypad.
    Type: Grant
    Filed: December 13, 2001
    Date of Patent: October 17, 2006
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Jean-Claude Junqua, Matteo Contolini
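A minimal sketch of the keypad constraint in the abstract above: lexicon words are encoded as touch-tone digit sequences, and the caller's keypad entry narrows the lexicon to the candidates the recognizer then decides between. The lexicon and the handling of non-letter characters are simplified assumptions.

```python
# T9-style encoding of the lexicon: the caller's keyed digits select candidate
# words, and the recognizer only has to choose among those candidates.
KEYMAP = {c: d for d, letters in {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
                                  "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}.items()
          for c in letters}

def to_digits(word):
    return "".join(KEYMAP[c] for c in word.lower())

def candidates(keypad_input, lexicon):
    return [w for w in lexicon if to_digits(w) == keypad_input]

lexicon = ["home", "good", "gone", "jones"]
print(candidates("4663", lexicon))    # ['home', 'good', 'gone'] -> speech picks one of these
```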
  • Patent number: 7103553
    Abstract: Unstructured voice information from an incoming caller is processed by an automatic speech recognition and semantic categorization system to convert the information into structured data that may then be used to access one or more databases to retrieve associated supplemental data. The structured data and associated supplemental data are then made available through a presentation system that provides information to the call center agent and, optionally, to the incoming caller. The system thus allows a call center information processing system to handle unstructured voice input for use by the live agent in handling the incoming call and for storage and retrieval at a later time. The semantic analysis system may be implemented by a global parser or by an information retrieval technique, such as latent semantic analysis. Co-occurrence of keywords may be used to associate prior calls with an incoming call to assist in understanding the purpose of the incoming call.
    Type: Grant
    Filed: June 4, 2003
    Date of Patent: September 5, 2006
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Ted Applebaum, Jean-Claude Junqua
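A sketch of the keyword co-occurrence idea mentioned at the end of the abstract above: prior calls sharing the most keywords with the incoming call's transcript are retrieved to help infer its purpose. The tokenization, stopword list, and Jaccard scoring are illustrative assumptions.

```python
# Keyword-overlap association: rank prior calls by Jaccard overlap with the
# incoming call's transcript keywords.
STOPWORDS = {"the", "a", "my", "is", "i", "to", "and", "of", "on"}

def keywords(text):
    return {w for w in text.lower().split() if w not in STOPWORDS}

def related_prior_calls(incoming, prior_calls, top_n=2):
    kw = keywords(incoming)
    def score(call):
        other = keywords(call)
        return len(kw & other) / max(len(kw | other), 1)
    return sorted(prior_calls, key=score, reverse=True)[:top_n]

prior = [
    "billing question about my internet invoice",
    "router keeps dropping the wireless connection",
    "change the shipping address on my order",
]
print(related_prior_calls("my internet invoice shows a double billing charge", prior))
```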
  • Patent number: 7096183
    Abstract: A method is provided for customizing the speaking style of a speech synthesizer. The method includes: receiving input text; determining semantic information for the input text; determining a speaking style for rendering the input text based on the semantic information; and customizing the audible speech output of the speech synthesizer based on the identified speaking style.
    Type: Grant
    Filed: February 27, 2002
    Date of Patent: August 22, 2006
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventor: Jean-Claude Junqua
  • Patent number: 7089182
    Abstract: A method for performing noise adaptation of a target speech signal input to a speech recognition system, where the target speech signal contains both additive and convolutional noises. The method includes estimating an additive noise bias and a convolutional noise bias in the target speech signal, and jointly compensating the target speech signal for the additive and convolutional noise biases in a feature domain.
    Type: Grant
    Filed: March 15, 2002
    Date of Patent: August 8, 2006
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Younes Souilmi, Luca Rigazio, Patrick Nguyen, Jean-Claude Junqua
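A generic two-step stand-in, not the patented joint scheme, illustrating the kind of compensation the abstract above describes: an additive bias estimated from non-speech frames is removed in the linear domain, then a convolutional (channel) bias is removed as a long-term log-spectral mean offset. All names and the synthetic data are assumptions.

```python
# Generic feature-domain compensation: subtract an additive noise estimate in
# the linear domain, then remove a convolutional bias as a log-spectral offset.
import numpy as np

def compensate(log_spec, nonspeech_mask, clean_longterm_mean):
    linear = np.exp(log_spec)
    noise = linear[nonspeech_mask].mean(axis=0)              # additive bias estimate
    denoised = np.log(np.maximum(linear - noise, 1e-6))      # floor keeps the log defined
    channel = denoised.mean(axis=0) - clean_longterm_mean    # convolutional bias estimate
    return denoised - channel

T, B = 200, 8
rng = np.random.default_rng(1)
feats = rng.normal(0.0, 1.0, (T, B)) + 2.0                   # noisy log-spectral features
mask = np.zeros(T, dtype=bool)
mask[:20] = True                                             # first frames assumed non-speech
clean_mean = np.zeros(B)                                     # long-term mean of clean training data
print(compensate(feats, mask, clean_mean).shape)
```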
  • Patent number: 7069214
    Abstract: A library of mouth shapes is created by separating speaker-dependent and speaker-independent variability. Preferably, speaker-dependent variability is modeled by a speaker space, while the speaker-independent variability (i.e., context dependency) is modeled by a set of normalized mouth shapes that need be built only once. Given a small amount of data from a new speaker, it is possible to construct a corresponding mouth shape library by estimating a point in speaker space that maximizes the likelihood of the adaptation data and by combining speaker-dependent and speaker-independent variability. Creation of talking heads is simplified because a library of mouth shapes can be built from only a few mouth shape instances. To build the speaker space, a context-independent mouth shape parametric representation is obtained. Then a supervector containing the set of context-independent mouth shapes is formed for each speaker included in the speaker space.
    Type: Grant
    Filed: March 12, 2002
    Date of Patent: June 27, 2006
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventor: Jean-Claude Junqua
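A toy sketch of the supervector and speaker-space construction in the abstract above: training speakers' context-independent mouth shapes are concatenated into supervectors, PCA yields a low-dimensional speaker space, and a new speaker's point in that space is estimated from a few observed shapes (least squares stands in here for the likelihood maximization), after which the full library is reconstructed. Dimensions and data are synthetic assumptions.

```python
# Toy eigen-style speaker space over mouth-shape supervectors; least squares
# from a few observed shapes recovers a point in the space and thus the rest.
import numpy as np

rng = np.random.default_rng(2)
n_speakers, n_shapes, dim = 12, 10, 4
train = rng.normal(size=(n_speakers, n_shapes * dim))        # one supervector per training speaker

mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
basis = vt[:3]                                               # 3-dimensional speaker space

# New speaker: only mouth shapes 0 and 1 observed (the first 2 * dim entries)
observed_idx = np.arange(2 * dim)
observed = rng.normal(size=2 * dim)
w, *_ = np.linalg.lstsq(basis[:, observed_idx].T, observed - mean[observed_idx], rcond=None)
full_library = (mean + w @ basis).reshape(n_shapes, dim)     # all mouth shapes estimated
print(full_library.shape)                                    # (10, 4)
```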
  • Patent number: 7064652
    Abstract: An improved method is provided for enrolling with a resource security system. The method includes: providing an access code to a system user; accessing the resource security system using the access code; prompting the user to input a biometric feature which identifies the user; capturing a biometric feature associated with the user; and associating the captured biometric feature with the identity of the user for subsequent verification. The method further includes subsequently granting access to the secured resource based on biometric feature data input by the user.
    Type: Grant
    Filed: September 9, 2002
    Date of Patent: June 20, 2006
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Jean-Claude Junqua, Philippe Morin
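A sketch of the enrollment flow in the abstract above: a single-use access code gates enrollment, the captured biometric template is associated with the user, and later access is granted by matching a fresh capture against that template. The similarity function, threshold, and class names are assumptions.

```python
# Hypothetical enrollment flow: access-code gate, template capture, association,
# and later verification against the stored template.
import secrets

class ResourceSecuritySystem:
    def __init__(self):
        self.access_codes = {}       # code -> user awaiting enrollment
        self.templates = {}          # user -> enrolled biometric template

    def issue_access_code(self, user):
        code = secrets.token_hex(4)
        self.access_codes[code] = user
        return code

    def enroll(self, code, biometric_template):
        user = self.access_codes.pop(code)           # single-use; KeyError if code is invalid
        self.templates[user] = biometric_template
        return user

    def verify(self, user, captured, similarity, threshold=0.8):
        return user in self.templates and similarity(self.templates[user], captured) >= threshold

system = ResourceSecuritySystem()
code = system.issue_access_code("junqua")
system.enroll(code, biometric_template=[0.1, 0.9, 0.4])
print(system.verify("junqua", [0.12, 0.88, 0.41],
                    similarity=lambda a, b: 1 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)))
```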
  • Patent number: 7062339
    Abstract: A control system including a portable device and a server. The portable device includes: (1) a body; (2) a microphone for receiving a first audio data; (3) an audio coder for converting the first audio data to first audio data signals; (4) an optical sensor for reading a first optical data; (5) an optical coder for converting the first optical data to first optical data signals; and (6) a transmitter for transmitting at least the first audio data signals or the first optical data signals to the server.
    Type: Grant
    Filed: May 9, 2002
    Date of Patent: June 13, 2006
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: John K. Howard, Dwayne Escola, Jim Pollock, Jean-Claude Junqua
  • Patent number: 6996528
    Abstract: A method and apparatus for data entry by voice under adverse conditions is disclosed. More specifically it provides a way for efficient and robust form filling by voice. A form can typically contain one or several fields that must be filled in. The user communicates to a speech recognition system and word spotting is performed upon the utterance. The spotted words of an utterance form a phrase that can contain field-specific values and/or commands. Recognized values are echoed back to the speaker via a text-to-speech system. Unreliable or unsafe inputs for which the confidence measure is found to be low (e.g. ill-pronounced speech or noises) are rejected by the spotter. Speaker adaptation is furthermore performed transparently to improve speech recognition accuracy. Other input modalities can be additionally supported (e.g. keyboard and touch-screen). The system maintains a dialogue history to enable editing and correction operations on all active fields.
    Type: Grant
    Filed: August 3, 2001
    Date of Patent: February 7, 2006
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Philippe R. Morin, Jean-Claude Junqua, Luca Rigazio, Robert C. Boman, Peter Veprek
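A sketch of the dialogue bookkeeping around the spotter described in the abstract above: recognized (field, value, confidence) tuples fill form fields, low-confidence input is rejected, accepted values are echoed back, and a history supports correction. The spotter and text-to-speech are stubbed, and the 0.6 threshold is an assumption.

```python
# Hypothetical form-filling dialogue: low-confidence spotter output is rejected,
# accepted values are echoed back, and a history supports undo/correction.
REJECT_THRESHOLD = 0.6

class VoiceForm:
    def __init__(self, fields):
        self.values = {f: None for f in fields}
        self.history = []

    def handle(self, field, value, confidence):
        if confidence < REJECT_THRESHOLD:
            print("(rejected, please repeat)")        # unreliable input dropped by the spotter
            return
        self.history.append((field, self.values[field]))
        self.values[field] = value
        print(f"TTS echo: {field} = {value}")         # value echoed back to the speaker

    def undo(self):
        if self.history:
            field, old = self.history.pop()
            self.values[field] = old

form = VoiceForm(["origin", "destination", "date"])
form.handle("origin", "Santa Barbara", 0.91)
form.handle("destination", "???", 0.32)               # noisy utterance, rejected
form.handle("destination", "Osaka", 0.85)
form.undo()                                            # e.g. a "correct that" command
print(form.values)
```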
  • Patent number: 6995668
    Abstract: A wearable, computerized apparatus for use with law enforcement has an evidence collector adapted to collect evidentiary information of a type collected according to law enforcement procedures and useful for identification of a suspect. It further has a safety monitor adapted to collect safety information relating to well-being of an officer. A wireless communications link communicates the evidentiary information and the safety information to a centralized component of a distributed communications system to assist in identifying suspects and dispatching assistance.
    Type: Grant
    Filed: July 7, 2004
    Date of Patent: February 7, 2006
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventor: Jean-Claude Junqua
  • Publication number: 20060009974
    Abstract: Dynamically constructed grammar-constraints and frequency or statistics-based constraints are used to constrain the speech recognizer and to optionally rescore the output to improve recognition accuracy. The recognition system is well adapted for hands-free operation of portable devices, such as for voice dialing operations.
    Type: Application
    Filed: July 9, 2004
    Publication date: January 12, 2006
    Applicant: MATSUSHITA ELECTRIC INDUSTRIAL CO., LTD.
    Inventors: Jean-Claude Junqua, Luca Rigazio, Jia Lei
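A sketch of the two constraint sources in the abstract above as they might apply to voice dialing: a grammar built dynamically from the contact list filters the recognizer's n-best output, which is then rescored with call-frequency statistics. The weighting and the n-best input are illustrative assumptions.

```python
# Dynamic grammar from the contact list plus frequency-based rescoring of the
# recognizer's n-best hypotheses.
def build_grammar(contacts):
    return {f"call {name.lower()}" for name in contacts}   # phrases built on the fly

def rescore(nbest, grammar, call_counts, lam=0.1):
    total = sum(call_counts.values()) or 1
    scored = []
    for phrase, acoustic_score in nbest:
        if phrase not in grammar:
            continue                                       # grammar constraint
        name = phrase.split("call ", 1)[1]
        prior = call_counts.get(name, 0) / total           # frequency/statistics constraint
        scored.append((acoustic_score + lam * prior, phrase))
    return max(scored)[1] if scored else None

contacts = ["Luca", "David", "Keiko"]
nbest = [("call luca", 0.71), ("call lucas", 0.73), ("call david", 0.70)]
print(rescore(nbest, build_grammar(contacts), call_counts={"luca": 20, "david": 3}))   # call luca
```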
  • Patent number: 6983244
    Abstract: A method for improving recognition results of a speech recognizer uses supplementary information to confirm recognition results. A user inputs speech to a speech recognizer. The speech recognizer resides on a mobile device or on a server at a remote location. The speech recognizer determines a recognition result based on the input speech. A confidence measure is calculated for the recognition result. If the confidence measure is below a threshold, the user is prompted for supplementary data. The supplementary data is determined dynamically based on ambiguities between the input speech and the recognition result, wherein the supplementary data will distinguish the input speech over potential incorrect results. The supplementary data may be a subset of alphanumeric characters that comprise the input speech, or other data associated with a desired result, such as an area code or location. The user may provide the supplementary data verbally, or manually using a keypad, touchpad, touchscreen, or stylus pen.
    Type: Grant
    Filed: August 29, 2003
    Date of Patent: January 3, 2006
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Jean-Claude Junqua, Roland Kuhn, Matteo Contolini, Rathinavelu Chengalvarayan
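A sketch of the disambiguation step in the abstract above: when confidence falls below a threshold, the character positions at which the top hypotheses differ are identified and the user is asked only for those characters, which then select among the candidates. The threshold, candidate list, and input callback are assumptions.

```python
# When confidence is low, ask only for the characters at positions where the
# top hypotheses disagree, then keep the candidate consistent with the answer.
THRESHOLD = 0.75

def confirm(candidates, confidence, ask_user):
    best = candidates[0]
    if confidence >= THRESHOLD:
        return best
    ambiguous = [i for i in range(len(best))
                 if any(len(c) <= i or c[i] != best[i] for c in candidates[1:])]
    answer = ask_user(ambiguous)                  # user supplies characters at those positions
    matches = [c for c in candidates
               if all(i < len(c) and c[i] == answer[j] for j, i in enumerate(ambiguous))]
    return matches[0] if matches else best

cands = ["MAIL", "NAIL", "MAIN"]                  # confusable recognition results
pick = confirm(cands, confidence=0.42, ask_user=lambda positions: "NL")
print(pick)                                       # NAIL
```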
  • Publication number: 20050286705
    Abstract: A call routing and supervising system includes an input receiving customer speech from a remote location, and a voice characteristics extractor extracting voice characteristics from the customer speech, such as language/dialect/accent, age group, gender, and eigendimension coordinates. A customer service representative selector selects one or more customer service representatives based on profiles of the customer service representatives respective of customers having voice characteristics similar to the extracted voice characteristics. In other aspects, a call monitor automatically analyzes dialogue between the customer and the customer service representative, such as detected interruptions, tracked dialogue turns, and recognized key phrases indicating frustration, politeness, and/or resolution characteristics of the dialogue. The call monitor records performance of the customer service representative respective of customers having the voice characteristics.
    Type: Application
    Filed: June 16, 2004
    Publication date: December 29, 2005
    Applicant: Matsushita Electric Industrial Co., Ltd.
    Inventors: Matteo Contolini, Jean-Claude Junqua