Patents Examined by Brian Albertalli
  • Patent number: 9953656
    Abstract: An audio decoder for providing at least four audio channel signals on the basis of an encoded representation is configured to provide a first residual signal and a second residual signal on the basis of a jointly encoded representation of the first residual signal and of the second residual signal using a multi-channel decoding. The audio decoder is configured to provide a first audio channel signal and a second audio channel signal on the basis of a first downmix signal and the first residual signal using a residual-signal-assisted multi-channel decoding. The audio decoder is configured to provide a third audio channel signal and a fourth audio channel signal on the basis of a second downmix signal and the second residual signal using a residual-signal-assisted multi-channel decoding. An audio encoder is based on corresponding considerations.
    Type: Grant
    Filed: January 22, 2016
    Date of Patent: April 24, 2018
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Sascha Dick, Christian Ertel, Christian Helmrich, Johannes Hilpert, Andreas Hoelzer, Achim Kuntz
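As a rough illustration of the residual-assisted idea (not the patent's actual MPEG-style decoder), the simplest such scheme is mid/side coding: the downmix is the mid signal, the residual is the side signal, and each two-channel pair is reconstructed exactly from its downmix and residual. All names below are illustrative.

```python
# Simplified sketch of residual-assisted two-channel reconstruction.
# Here the "residual" is just the side signal of a mid/side pair, so
# reconstruction is exact: left = mid + side, right = mid - side.

def downmix_encode(left, right):
    """Encode a channel pair as (downmix, residual) = (mid, side)."""
    mid = [(l + r) / 2.0 for l, r in zip(left, right)]
    side = [(l - r) / 2.0 for l, r in zip(left, right)]
    return mid, side

def residual_assisted_decode(downmix, residual):
    """Reconstruct a channel pair from its downmix and residual."""
    left = [m + s for m, s in zip(downmix, residual)]
    right = [m - s for m, s in zip(downmix, residual)]
    return left, right

def decode_four_channels(downmix1, residual1, downmix2, residual2):
    """Two residual-assisted decodes yield the four output channels."""
    ch1, ch2 = residual_assisted_decode(downmix1, residual1)
    ch3, ch4 = residual_assisted_decode(downmix2, residual2)
    return ch1, ch2, ch3, ch4
```

The claimed decoder additionally jointly encodes the two residuals; that step is omitted here.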
  • Patent number: 9940938
    Abstract: An audio decoder for providing at least four audio channel signals on the basis of an encoded representation is configured to provide a first residual signal and a second residual signal on the basis of a jointly encoded representation of the first residual signal and of the second residual signal using a multi-channel decoding. The audio decoder is configured to provide a first audio channel signal and a second audio channel signal on the basis of a first downmix signal and the first residual signal using a residual-signal-assisted multi-channel decoding. The audio decoder is configured to provide a third audio channel signal and a fourth audio channel signal on the basis of a second downmix signal and the second residual signal using a residual-signal-assisted multi-channel decoding. An audio encoder is based on corresponding considerations.
    Type: Grant
    Filed: May 27, 2016
    Date of Patent: April 10, 2018
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Sascha Dick, Christian Ertel, Christian Helmrich, Johannes Hilpert, Andreas Hoelzer, Achim Kuntz
  • Patent number: 9922352
    Abstract: A multidimensional synopsis of a stream of textual data pertaining to a particular subject can be generated. To produce the multidimensional synopsis, multiple dimensions that each include concepts can be identified. The stream of textual data can then be analyzed to identify the occurrence of the concepts within elements of the stream. The multidimensional synopsis can then be produced by generating a score for each intersecting set of concepts from the multiple dimensions. Therefore, each score can generally represent a prevalence of the corresponding intersecting set of concepts within the stream of textual data.
    Type: Grant
    Filed: January 25, 2016
    Date of Patent: March 20, 2018
    Assignee: Quest Software Inc.
    Inventors: Abel Tegegne, Vineetha Abraham, Mitch Brisebois
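A minimal sketch of the scoring step, under the assumption that a "score" is simply the number of stream elements containing every concept of a cross-dimension tuple (the patent leaves the scoring function open):

```python
from itertools import product

def multidimensional_synopsis(elements, dimensions):
    """
    elements:   list of text snippets (the stream, already split into elements)
    dimensions: dict mapping dimension name -> list of concept keywords
    Returns a dict mapping each cross-dimension concept tuple to the number
    of elements in which every concept of the tuple occurs (its "score").
    """
    concept_lists = list(dimensions.values())
    scores = {}
    for combo in product(*concept_lists):
        scores[combo] = sum(
            all(concept.lower() in element.lower() for concept in combo)
            for element in elements
        )
    return scores
```

For dimensions "person" and "topic", the score of ("alice", "billing") is how many stream elements mention both.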
  • Patent number: 9916307
    Abstract: Dynamic translation of idioms is performed with respect to electronic communications. An electronic communication is observed and movement of indicia proximal to a phrase in the electronic communication is detected. In response to the detection, an idiom search application is activated which identifies an idiom within the phrase and searches a corpus for a translation of the idiom and one or more associated characteristics. In response to detection of the translation in the corpus, profile metadata related to the observed communication is collected and compared to the one or more characteristics. The idiom and the collected profile metadata are stored in a corpus that supports a search of the idiom. In response to absence of the translation in the corpus, the idiom is dynamically translated. The translated idiom is presented proximal to the evaluated expression.
    Type: Grant
    Filed: December 9, 2016
    Date of Patent: March 13, 2018
    Assignee: International Business Machines Corporation
    Inventors: Nadiya Kochura, Alphonse J. Wojtas
  • Patent number: 9881615
    Abstract: A speech recognition apparatus and method. The speech recognition apparatus includes a first recognizer configured to generate a first recognition result of an audio signal, in a first linguistic recognition unit, by using an acoustic model, a second recognizer configured to generate a second recognition result of the audio signal, in a second linguistic recognition unit, by using a language model, and a combiner configured to combine the first recognition result and the second recognition result to generate a final recognition result in the second linguistic recognition unit and to reflect the final recognition result in the language model. The first linguistic recognition unit may be a same linguistic unit type as the second linguistic recognition unit. The first recognizer and the second recognizer are configured in a same neural network and simultaneously/collectively trained in the neural network using audio training data provided to the first recognizer.
    Type: Grant
    Filed: July 8, 2016
    Date of Patent: January 30, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hee Youl Choi, Seokjin Hong
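The combiner step can be caricatured as a log-linear rescoring: each candidate gets an acoustic-model score and a language-model score, and the combined score ranks hypotheses. This toy version operates on precomputed per-candidate log-probabilities rather than on the joint neural network the patent describes.

```python
import math

def combine_recognition_results(acoustic_scores, lm_scores, lm_weight=0.5):
    """
    Toy log-linear combination of per-candidate scores from an acoustic
    model and a language model; returns candidates ranked best-first.
    Both inputs are dicts mapping candidate string -> log-probability.
    """
    combined = {
        cand: acoustic_scores[cand] + lm_weight * lm_scores.get(cand, -math.inf)
        for cand in acoustic_scores
    }
    return sorted(combined, key=combined.get, reverse=True)
```

With a strong language model, an acoustically plausible but nonsensical candidate can be demoted.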
  • Patent number: 9881614
    Abstract: The disclosed embodiments illustrate methods and systems for summary generation of a real-time conversation. The method includes receiving a real-time conversation from a plurality of computing devices over a communication network. The method further includes determining one or more first features of the real-time conversation between at least a first user and a second user. The method further includes extracting one or more second features from the one or more first features, based on one or more pre-defined criteria. The method further includes generating a summary content of the real-time conversation, based on at least the extracted one or more second features and one or more annotations associated with the determined one or more first features by use of one or more trained classifiers. Further, the method includes rendering the generated summary content on a user interface displayed on at least one of the plurality of computing devices.
    Type: Grant
    Filed: July 8, 2016
    Date of Patent: January 30, 2018
    Assignee: CONDUENT BUSINESS SERVICES, LLC
    Inventors: Raghuveer Thirukovalluru, Ragunathan Mariappan, Shourya Roy
  • Patent number: 9875300
    Abstract: Electronic natural language processing in a natural language processing (NLP) system, such as a Question-Answering (QA) system. A computer receives electronic text input, in question form, and determines a readability level indicator in the question. The readability level indicator includes at least a grammatical error, a slang term, and a misspelling type. The computer determines a readability level for the electronic text input based on the readability level indicator, and retrieves candidate answers based on the readability level.
    Type: Grant
    Filed: May 24, 2016
    Date of Patent: January 23, 2018
    Assignee: International Business Machines Corporation
    Inventors: Donna K. Byron, Devendra Goyal, Lakshminarayanan Krishnamurthy, Priscilla Santos Moraes, Michael C. Smith
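A toy version of the indicator-to-level mapping, assuming hypothetical slang and known-word lexicons (the patent does not specify how indicators are detected or bucketed):

```python
def readability_level(question, slang_terms, known_words):
    """
    Toy readability scoring: count slang terms and words absent from a
    small known-word lexicon (treated as misspellings), then bucket the
    total indicator count into a readability level.
    """
    tokens = [t.strip(".,?!").lower() for t in question.split()]
    slang = sum(t in slang_terms for t in tokens)
    misspelled = sum(t not in known_words and t not in slang_terms
                     for t in tokens)
    indicators = slang + misspelled
    if indicators == 0:
        return "high"
    if indicators <= 2:
        return "medium"
    return "low"
```

The resulting level would then steer which candidate answers are retrieved.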
  • Patent number: 9875238
    Abstract: Systems and methods for establishing a language translation setting for a telephony communication determine whether first and second parties to the telephony communication are likely to speak different languages. If so, one or both parties are queried to determine if they would like a language translation to be performed. One or both parties' response to that query is used to establish a language translation setting for the telephony communication. If one or both parties request a translation, some form of real-time translation may then be provided.
    Type: Grant
    Filed: March 16, 2016
    Date of Patent: January 23, 2018
    Assignee: VONAGE AMERICA INC.
    Inventors: Yuval Golan, Gil Osher
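The decision flow of the abstract reduces to a small amount of logic. In this sketch the per-party language guesses and acceptance responses are inputs; in a real system the guesses would come from signals such as account profiles or number regions.

```python
def establish_translation_setting(caller_language, callee_language,
                                  caller_accepts, callee_accepts):
    """
    If the two parties' likely languages differ, translation is offered;
    it is enabled only if at least one queried party accepts the offer.
    """
    if caller_language == callee_language:
        return {"offered": False, "enabled": False}
    enabled = caller_accepts or callee_accepts
    return {"offered": True, "enabled": enabled}
```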
  • Patent number: 9858936
    Abstract: In some embodiments, a method for selecting at least one layer of a spatially layered, encoded audio signal. Typical embodiments are teleconferencing methods in which at least one of a set of nodes (endpoints, each of which is a telephone system, and optionally also a server) is configured to perform audio coding in response to soundfield audio data to generate spatially layered encoded audio including any of a number of different subsets of a set of layers, the set of layers including at least one monophonic layer, at least one soundfield layer, and optionally also at least one metadata layer comprising metadata indicative of at least one processing operation to be performed on the encoded audio. Other aspects are systems configured (e.g., programmed) to perform any embodiment of the method, and computer readable media which store code for implementing any embodiment of the method or steps thereof.
    Type: Grant
    Filed: September 11, 2013
    Date of Patent: January 2, 2018
    Assignees: Dolby Laboratories Licensing Corporation, Dolby International AB
    Inventors: Richard James Cartwright, Glenn Dickins
  • Patent number: 9858258
    Abstract: Automatic locale determination for documents is described. In an embodiment, a computer server receives an electronic document comprising a plurality of unknown-language data elements each associated with one or more types. Based on a document schema of the document, the computer system selects one or more unknown-language data elements from the plurality of unknown-language data elements and assigns to each of the one or more unknown-language data elements a corresponding weight value based on a respective type of the unknown-language data element.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: January 2, 2018
    Assignee: Coupa Software Incorporated
    Inventor: Matthew Pasquini
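The type-weighted scheme can be sketched as a weighted vote: each element contributes its type's weight to whatever language a detector guesses for it. The detector here is an arbitrary stand-in, not the classifier the real system would use.

```python
def determine_locale(elements, type_weights, guess_language):
    """
    Weighted vote over a document's data elements.
    elements:     list of (type, text) pairs
    type_weights: dict mapping element type -> weight
    guess_language: any per-element language detector (stand-in)
    Returns the language with the highest total weight.
    """
    totals = {}
    for elem_type, text in elements:
        weight = type_weights.get(elem_type, 1.0)
        lang = guess_language(text)
        totals[lang] = totals.get(lang, 0.0) + weight
    return max(totals, key=totals.get)
```

A high-weight title element can outvote several low-weight body elements.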
  • Patent number: 9858336
    Abstract: Electronic natural language processing in a natural language processing (NLP) system, such as a Question-Answering (QA) system. A receives electronic text input, in question form, and determines a readability level indicator in the question. The readability level indicator includes at least a grammatical error, a slang term, and a misspelling type. The computer determines a readability level for the electronic text input based on the readability level indicator, and retrieves candidate answers based on the readability level.
    Type: Grant
    Filed: January 5, 2016
    Date of Patent: January 2, 2018
    Assignee: International Business Machines Corporation
    Inventors: Donna K. Byron, Devendra Goyal, Lakshminarayanan Krishnamurthy, Priscilla Santos Moraes, Michael C. Smith
  • Patent number: 9837087
    Abstract: A method for encoding multi-channel HOA audio signals for noise reduction comprises steps of decorrelating the channels using an inverse adaptive DSHT, the inverse adaptive DSHT comprising a rotation operation and an inverse DSHT, with the rotation operation rotating the spatial sampling grid of the iDSHT, perceptually encoding each of the decorrelated channels, encoding rotation information, the rotation information comprising parameters defining said rotation operation, and transmitting or storing the perceptually encoded audio channels and the encoded rotation information.
    Type: Grant
    Filed: September 26, 2016
    Date of Patent: December 5, 2017
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Johannes Boehm, Sven Kordon, Alexander Krueger, Peter Jax
  • Patent number: 9799331
    Abstract: A feature compensation apparatus includes a feature extractor configured to extract corrupt speech features from a corrupt speech signal with additive noise that consists of two or more frames; a noise estimator configured to estimate noise features based on the extracted corrupt speech features and compensated speech features; a probability calculator configured to calculate a correlation between adjacent frames of the corrupt speech signal; and a speech feature compensator configured to generate compensated speech features by eliminating noise features of the extracted corrupt speech features while taking into consideration the correlation between adjacent frames of the corrupt speech signal and the estimated noise features, and to transmit the generated compensated speech features to the noise estimator.
    Type: Grant
    Filed: March 18, 2016
    Date of Patent: October 24, 2017
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Hyun Woo Kim, Ho Young Jung, Jeon Gue Park, Yun Keun Lee
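The estimator/compensator feedback loop can be caricatured in a few lines: a running noise estimate is subtracted from each corrupt feature frame, and the compensated frame feeds back into the next noise estimate. This ignores the inter-frame correlation modeling the abstract emphasizes.

```python
def compensate_features(corrupt_frames, smoothing=0.9):
    """
    Minimal feature-compensation feedback loop. Frames are lists of
    floats (e.g. spectral features); the noise estimate is initialized
    from the first frame and updated from the residue left after each
    compensation step.
    """
    noise = list(corrupt_frames[0])          # initial noise estimate
    compensated = []
    for frame in corrupt_frames:
        clean = [max(c - n, 0.0) for c, n in zip(frame, noise)]
        compensated.append(clean)
        # update the noise estimate from what compensation removed
        noise = [smoothing * n + (1 - smoothing) * (c - cl)
                 for n, c, cl in zip(noise, frame, clean)]
    return compensated
```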
  • Patent number: 9792282
    Abstract: A portion of a software product is analyzed to determine a language and a subject-matter domain of the portion. A string is extracted from the portion, where the string has been translated into the language from an original string in an original language in a version of the software product. A corpus including a set of stored strings in the first language is selected. A subset of stored strings is selected from a content that is related to the subject-matter domain of the software product. When the string matches a stored string in the corpus, the string is selected into a shortlist and when a second string extracted from the portion fails to match any stored string in the corpus, the second string is excluded from the shortlist. The shortlist is output, causing a review of an accuracy of a machine translation process to be performed.
    Type: Grant
    Filed: July 11, 2016
    Date of Patent: October 17, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Steven E. Atkin, Lisa McCabe
  • Patent number: 9786270
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating acoustic models. In some implementations, a first neural network trained as an acoustic model using the connectionist temporal classification algorithm is obtained. Output distributions from the first neural network are obtained for an utterance. A second neural network is trained as an acoustic model using the output distributions produced by the first neural network as output targets for the second neural network. An automated speech recognizer configured to use the trained second neural network is provided.
    Type: Grant
    Filed: July 8, 2016
    Date of Patent: October 10, 2017
    Assignee: Google Inc.
    Inventors: Andrew W. Senior, Hasim Sak, Kanury Kanishka Rao
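The core of training the second network on the first network's output distributions is a soft-target cross-entropy (knowledge distillation). A minimal sketch of that loss, in pure Python and detached from any speech pipeline:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_probs, student_logits):
    """
    Cross-entropy of the student's predicted distribution against the
    teacher's output distribution (the "output targets" of the abstract).
    Minimizing this over the student's parameters trains the second
    network to mimic the first.
    """
    student_probs = softmax(student_logits)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))
```

By Gibbs' inequality the loss is minimized when the student's distribution matches the teacher's.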
  • Patent number: 9779761
    Abstract: Arrangements described herein relate to receiving, in real time, utterances spoken or sung by a first person when the utterances are spoken or sung and comparing, in real time, the detected utterances spoken or sung by the first person to at least a stored sample of utterances spoken or sung by the first person. Based, at least in part, on the comparing the detected utterances spoken or sung by the first person to at least the stored sample of utterances spoken or sung by the first person, a key indicator that indicates at least one characteristic of the detected utterances spoken or sung by the first person can be generated. Feedback indicating the at least one characteristic of the detected utterances spoken or sung by the first person can be communicated to the first person or a second person.
    Type: Grant
    Filed: April 21, 2016
    Date of Patent: October 3, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Alan D. Emery, Aditya Sood, Mathews Thomas, Janki Y. Vora
  • Patent number: 9779727
    Abstract: The claimed subject matter includes a system and method for recognizing mixed speech from a source. The method includes training a first neural network to recognize the speech signal spoken by the speaker with a higher level of a speech characteristic from a mixed speech sample. The method also includes training a second neural network to recognize the speech signal spoken by the speaker with a lower level of the speech characteristic from the mixed speech sample. Additionally, the method includes decoding the mixed speech sample with the first neural network and the second neural network by optimizing the joint likelihood of observing the two speech signals considering the probability that a specific frame is a switching point of the speech characteristic.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: October 3, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dong Yu, Chao Weng, Michael L. Seltzer, James Droppo
  • Patent number: 9779723
    Abstract: A voice recognition system for a vehicle includes a microphone for receiving speech from a user. The system further includes a memory having a partial set of commands or names for voice recognition. The memory further includes a larger set of commands or names for voice recognition. The system further includes processing electronics in communication with the microphone and the memory. The processing electronics are configured to process the received speech to obtain speech data. The processing electronics are further configured to use the obtained speech data to conduct at least two voice recognition passes. In a first pass, the speech data is compared to the partial set. In a second pass, the speech data is compared to the larger set.
    Type: Grant
    Filed: June 21, 2013
    Date of Patent: October 3, 2017
    Assignee: Visteon Global Technologies, Inc.
    Inventors: Sorin M. Panainte, David J. Hughes
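The two-pass structure can be sketched as a fallback search: the small set is tried first, and the larger set only when no first-pass candidate is confident enough. The `match_score` function is a stand-in for the recognizer's actual scoring.

```python
def two_pass_recognize(speech_data, partial_set, larger_set, match_score,
                       threshold=0.8):
    """
    First pass: score the utterance against the small command set; accept
    the best candidate if it clears the threshold. Second pass: repeat
    against the larger set. Returns None if neither pass succeeds.
    """
    for commands in (partial_set, larger_set):
        best = max(commands, key=lambda c: match_score(speech_data, c))
        if match_score(speech_data, best) >= threshold:
            return best
    return None
```

Keeping the first set small lets the common case resolve quickly on modest in-vehicle hardware.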
  • Patent number: 9772993
    Abstract: A system and method of recording utterances for building Named Entity Recognition (“NER”) models, which are used to build dialog systems in which a computer listens and responds to human voice dialog. Utterances to be uttered may be provided to users through their mobile devices, which may record the user uttering (e.g., verbalizing, speaking, etc.) the utterances and upload the recording to a computer for processing. The use of the user's mobile device, which is programmed with an utterance collection application (e.g., configured as a mobile app), facilitates the use of crowd-sourcing human intelligence tasking for widespread collection of utterances from a population of users. As such, obtaining large datasets for building NER models may be facilitated by the system and method disclosed herein.
    Type: Grant
    Filed: July 20, 2016
    Date of Patent: September 26, 2017
    Assignee: VoiceBox Technologies Corporation
    Inventors: Daniela Braga, Spencer John Rothwell, Faraz Romani, Ahmad Khamis Elshenawy, Stephen Steele Carter, Michael Kennewick
  • Patent number: 9767193
    Abstract: A generation apparatus that generates a contracted sentence in which one part of a plurality of words included in a sentence is removed, the generation apparatus includes a memory configured to store a first index for determining whether two words are left as a pair in the contracted sentence, for each characteristic between the two words being connected to each other in the sentence through a grammatical or conceptual relation, and a processor coupled to the memory and configured to generate the contracted sentence by removing the one part of the plurality of words based on the first index corresponding to every pair of two words connected to each other with the grammatical or conceptual relation, and output the contracted sentence.
    Type: Grant
    Filed: March 14, 2016
    Date of Patent: September 19, 2017
    Assignee: FUJITSU LIMITED
    Inventor: Nobuyuki Katae
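A toy version of contraction driven by a per-pair index: a word survives if at least one of its grammatical links scores above a keep threshold. The edge list and index table are illustrative stand-ins for the dependency structure the patent assumes.

```python
def contract_sentence(words, pair_edges, pair_index, keep_threshold=0.5):
    """
    words:      list of tokens in the sentence
    pair_edges: (i, j) index pairs of grammatically connected words
    pair_index: dict mapping such a pair to its stored first-index value
    A word is kept if any edge touching it scores >= keep_threshold.
    """
    keep = set()
    for (i, j) in pair_edges:
        if pair_index.get((i, j), 0.0) >= keep_threshold:
            keep.add(i)
            keep.add(j)
    return [w for k, w in enumerate(words) if k in keep]
```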