Patents Examined by Leonard Saint-Cyr
  • Patent number: 10685185
    Abstract: Keyword recommendation methods and systems based on a latent Dirichlet allocation (LDA) model. The method comprises: calculating a basic Dirichlet allocation model for training texts; obtaining an incremental seed word, and selecting from the training texts a training text matching the incremental seed word to serve as an incremental training text; calculating an incremental Dirichlet allocation model for the incremental training text; obtaining a probability distribution of complete words to topics and a probability distribution of complete texts to topics; calculating a relevance score between each complete word and every other complete word to obtain a relevance score between every two complete words; and determining, according to an obtained query word and the obtained relevance scores between every two complete words, a keyword corresponding to the query word.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: June 16, 2020
    Assignee: Guangzhou Shenma Mobile Information Technology Co., Ltd.
    Inventors: Jingtong Wu, Tianning Li
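    The entry above centers on scoring relevance between words from their topic distributions. A minimal sketch of that scoring step follows; it assumes a word-topic probability matrix from any fitted LDA model and uses cosine similarity, and is not the patented incremental-training procedure.
    ```python
    # Sketch: score relevance between every two words from their LDA topic
    # distributions, then recommend the top-scoring keywords for a query word.
    import numpy as np

    def word_relevance(word_topic: np.ndarray) -> np.ndarray:
        """word_topic[i, k] = P(topic k | word i); returns a word-by-word relevance matrix."""
        norms = np.linalg.norm(word_topic, axis=1, keepdims=True)
        unit = word_topic / np.clip(norms, 1e-12, None)
        return unit @ unit.T

    def recommend(query: str, vocab: list, relevance: np.ndarray, top_n: int = 3) -> list:
        """Return the top_n words most relevant to the query word."""
        i = vocab.index(query)
        order = np.argsort(relevance[i])[::-1]
        return [vocab[j] for j in order if j != i][:top_n]

    if __name__ == "__main__":
        vocab = ["cat", "dog", "stock", "bond"]
        # Toy word-topic distributions over 2 topics (pets vs. finance).
        word_topic = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
        rel = word_relevance(word_topic)
        print(recommend("cat", vocab, rel))   # -> ['dog', ...]
    ```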
  • Patent number: 10679012
    Abstract: Disclosed are an apparatus, a system, and a non-transitory computer readable medium that implement processing circuitry that receives non-dialog information from a smart device and determines a data type of data in the received non-dialog information. Based on the determined data type, the processing circuitry transforms the received data, using an input from a machine learning algorithm, into transformed data. The transformed data is standardized data that is palatable for machine learning algorithms such as those implemented as chatbots. The standardized transformed data is useful for training multiple different chatbot systems and enables the typically underutilized non-dialog information to be used as training input to improve context and conversation flow between a chatbot and a user.
    Type: Grant
    Filed: April 18, 2019
    Date of Patent: June 9, 2020
    Assignee: Capital One Services, LLC
    Inventors: Alan Salimov, Anish Khazane, Omar Florez Choque
  • Patent number: 10672408
    Abstract: A method for representing a second presentation of audio channels or objects as a data stream, the method comprising the steps of: (a) providing a set of base signals, the base signals representing a first presentation of the audio channels or objects; (b) providing a set of transformation parameters, the transformation parameters intended to transform the first presentation into the second presentation; the transformation parameters further being specified for at least two frequency bands and including a set of multi-tap convolution matrix parameters for at least one of the frequency bands.
    Type: Grant
    Filed: August 23, 2016
    Date of Patent: June 2, 2020
    Assignees: Dolby Laboratories Licensing Corporation, Dolby International AB
    Inventors: Dirk Jeroen Breebaart, David Matthew Cooper, Leif Jonas Samuelsson
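    The core operation above is applying per-frequency-band, multi-tap convolution matrix parameters to a set of base signals. The sketch below illustrates that operation under assumed shapes (signals already split into bands as frame sequences); it is not the Dolby decoder itself.
    ```python
    # Sketch: apply a multi-tap convolution matrix per frequency band to the
    # base signals to obtain the second presentation.  bands[b] has shape
    # (n_in, n_frames); params[b] has shape (n_taps, n_out, n_in), i.e. one
    # mixing matrix per tap.
    import numpy as np

    def transform_band(x: np.ndarray, taps: np.ndarray) -> np.ndarray:
        n_taps, n_out, n_in = taps.shape
        n_frames = x.shape[1]
        y = np.zeros((n_out, n_frames), dtype=np.result_type(x, taps))
        for t in range(n_taps):
            # Delayed copy of the band signal, zero-padded at the start.
            delayed = np.zeros_like(x)
            if t == 0:
                delayed[:] = x
            else:
                delayed[:, t:] = x[:, :-t]
            y += taps[t] @ delayed
        return y

    def transform_presentation(bands, params):
        """Apply the per-band transformation parameters to every frequency band."""
        return [transform_band(x, w) for x, w in zip(bands, params)]
    ```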
  • Patent number: 10657962
    Abstract: An information processing system, a computer program product, and methods for modeling multi-party dialog interactions. A method includes learning, directly from data obtained from a multi-party conversational channel, to identify particular multi-party dialog threads as well as participants in one or more conversations. Each participant utterance is converted to a continuous vector representation that is updated in a model of the multi-party dialog relative to each participant utterance and according to each participant's role, selected from the set of sender, addressee, or observer. The method trains the model to choose a correct addressee and a correct response for each participant utterance, using a joint selection criterion. The method learns, directly from the data obtained from the multi-party conversational channel, which dialog turns belong to each particular multi-party dialog thread.
    Type: Grant
    Filed: May 2, 2018
    Date of Patent: May 19, 2020
    Assignees: International Business Machines Corporation, University of Michigan
    Inventors: Rui Zhang, Lazaros Polymenakos, Dragomir Radev, David Nahamoo, Honglak Lee
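    A minimal sketch of the joint selection idea described above: given a dialog-state vector and candidate embeddings (all hypothetical here, not the trained model from the patent), every (addressee, response) pair is scored and the jointly best pair is chosen.
    ```python
    # Sketch of a joint selection criterion over addressee/response candidates.
    import numpy as np

    def joint_select(state, addressee_vecs, response_vecs):
        """state: (d,); addressee_vecs: (A, d); response_vecs: (R, d)."""
        a_scores = addressee_vecs @ state                       # (A,)
        r_scores = response_vecs @ state                        # (R,)
        pair_scores = a_scores[:, None] + r_scores[None, :]     # (A, R) joint scores
        a_idx, r_idx = np.unravel_index(np.argmax(pair_scores), pair_scores.shape)
        return a_idx, r_idx, pair_scores[a_idx, r_idx]
    ```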
  • Patent number: 10650802
    Abstract: A voice recognition method is provided that includes extracting a first speech from the sound collected with a microphone connected to a voice processing device, and calculating a recognition result for the first speech and the confidence level of the first speech. The method also includes performing a speech for a repetition request based on the calculated confidence level of the first speech, and extracting with the microphone a second speech obtained through the repetition request. The method further includes calculating a recognition result for the second speech and the confidence level of the second speech, and generating a recognition result from the recognition result for the first speech and the recognition result for the second speech, based on the calculated confidence level of the second speech.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: May 12, 2020
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Yuji Kunitake, Yusaku Ota
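    The control flow described above can be sketched in a few lines. The recognizer, the repetition prompt, the threshold, and the merge rule below are hypothetical stand-ins, not Panasonic's implementation.
    ```python
    # Sketch of confidence-gated repetition for voice recognition.
    CONFIDENCE_THRESHOLD = 0.7   # assumed threshold, not taken from the patent

    def recognize_with_repetition(recognize, ask_repetition):
        """recognize() -> (text, confidence); ask_repetition() prompts the user to repeat."""
        first_text, first_conf = recognize()
        if first_conf >= CONFIDENCE_THRESHOLD:
            return first_text
        ask_repetition()                      # e.g. synthesize "Could you repeat that?"
        second_text, second_conf = recognize()
        # Generate the final result from both recognitions, based on the
        # confidence level of the second speech.
        return second_text if second_conf >= first_conf else first_text
    ```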
  • Patent number: 10650096
    Abstract: Embodiments of the present disclosure disclose a word segmentation method based on artificial intelligence, a server and a storage medium. The word segmentation method may include: acquiring a corpus to be segmented and a segmentation model corresponding to a preset segmentation template; matching the corpus to be segmented with the segmentation model according to a preset matching algorithm, and acquiring a target phrase satisfying a first preset rule in the corpus to be segmented; modifying an emission matrix corresponding to the segmentation model and the corpus to be segmented according to the target phrase; and performing a word segmentation on the corpus to be segmented according to the emission matrix modified, to acquire a first segmentation result.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: May 12, 2020
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Liqun Zheng, Jinbo Zhan, Qiugen Xiao, Zhihong Fu, Jingzhou He, Guyue Zhou
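    The key step above is modifying an emission matrix so that a phrase matched by the template is segmented as one unit. The sketch below illustrates that idea with a BMES tag scheme and a greedy decode; the scores and decoder are toy stand-ins, not Baidu's segmentation model.
    ```python
    # Sketch: boost the emission scores of the tag sequence that keeps a
    # matched target phrase together, then decode greedily.
    import numpy as np

    TAGS = ["B", "M", "E", "S"]   # begin / middle / end / single-character word

    def boost_phrase(emission: np.ndarray, start: int, length: int, bonus: float = 10.0):
        """Raise the scores of the tags that segment text[start:start+length] as one word."""
        if length == 1:
            emission[start, TAGS.index("S")] += bonus
            return
        emission[start, TAGS.index("B")] += bonus
        for i in range(start + 1, start + length - 1):
            emission[i, TAGS.index("M")] += bonus
        emission[start + length - 1, TAGS.index("E")] += bonus

    def greedy_segment(text: str, emission: np.ndarray):
        tags = [TAGS[i] for i in emission.argmax(axis=1)]
        words, buf = [], ""
        for ch, tag in zip(text, tags):
            buf += ch
            if tag in ("E", "S"):
                words.append(buf)
                buf = ""
        if buf:
            words.append(buf)
        return words
    ```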
  • Patent number: 10650095
    Abstract: Understanding emojis in the context of online experiences is described. In at least some embodiments, text input is received and a vector representation of the text input is computed. Based on the vector representation, one or more emojis that correspond to the vector representation of the text input are ascertained and a response is formulated that includes at least one of the one or more emojis. In other embodiments, input from a client machine is received. The input includes at least one emoji. A computed vector representation of the emoji is used to look for vector representations of words or phrases that are close to the computed vector representation of the emoji. At least one of the words or phrases is selected and at least one task is performed using the selected word(s) or phrase(s).
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: May 12, 2020
    Assignee: eBay Inc.
    Inventors: Dishan Gupta, Ajinkya Gorakhnath Kale, Stefan Boyd Schoenmackers, Amit Srivastava
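    The second embodiment above maps an emoji's vector to nearby word vectors. A minimal sketch, assuming pre-trained embeddings in which emojis and words share one vector space (the tiny table below is made up for illustration):
    ```python
    # Sketch: find the words whose vectors are closest to an emoji's vector.
    import numpy as np

    EMBEDDINGS = {
        "happy":  np.array([0.9, 0.1]),
        "sad":    np.array([0.1, 0.9]),
        "😀":     np.array([0.85, 0.15]),
        "😢":     np.array([0.15, 0.85]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def closest_words(emoji: str, top_n: int = 1):
        """Words or phrases whose vectors are closest to the emoji's vector."""
        query = EMBEDDINGS[emoji]
        candidates = [(w, cosine(query, v)) for w, v in EMBEDDINGS.items()
                      if w != emoji and w.isascii()]
        return sorted(candidates, key=lambda p: p[1], reverse=True)[:top_n]

    print(closest_words("😢"))   # -> [('sad', ...)]
    ```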
  • Patent number: 10650193
    Abstract: A system and method are disclosed for lowering the probability that low data rates, high bit error rates, and susceptibility to signal degradation in communication environments (during storms, solar activity, and adversarial activity) will affect communications. The system and method do so by minimizing the amount of data/information that needs to be transmitted, in part through an algorithmic process/method that moves knowledge rather than data and information. The approach is based upon the realization that structured communications often possess a similar level of context that can be exploited to communicate full meaning (knowledge), even when only a small fraction of the message is transmitted to the receiver. Reducing the number of bytes transmitted significantly reduces the probability that a transmission will be affected by either naturally occurring or human-supplied factors present in modern communication environments.
    Type: Grant
    Filed: August 11, 2019
    Date of Patent: May 12, 2020
    Assignee: Bevilacqua Research Corp
    Inventors: Andy Bevilacqua, Roy Brown, Glenn Hembree
  • Patent number: 10643623
    Abstract: An audio signal coding apparatus includes a time-frequency transformer that outputs sub-band spectra from an input signal; a sub-band energy quantizer; a tonality calculator that analyzes tonality of the sub-band spectra; a bit allocator that selects a second sub-band on which quantization is performed by a second quantizer on the basis of the analysis result of the tonality and quantized sub-band energy, and determines a first number of bits to be allocated to a first sub-band on which quantization is performed by a first quantizer; the first quantizer that performs first coding using the first number of bits; the second quantizer that performs coding using a second coding method; and a multiplexer.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: May 5, 2020
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Takuya Kawashima, Hiroyuki Ehara
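    The bit allocator above chooses a sub-band for the second quantizer from tonality and spreads the remaining bits according to quantized sub-band energy. The sketch below illustrates that split; the spectral-flatness tonality measure and the proportional allocation are assumptions for illustration, not the codec's actual rule.
    ```python
    # Sketch of tonality-driven bit allocation across sub-bands.
    import numpy as np

    def spectral_flatness(spectrum: np.ndarray) -> float:
        power = np.abs(spectrum) ** 2 + 1e-12
        return float(np.exp(np.mean(np.log(power))) / np.mean(power))  # 1 = noise-like, ->0 = tonal

    def allocate_bits(sub_bands, quantized_energy, total_bits):
        """Select the most tonal sub-band for the second quantizer and spread
        the remaining bits over the other sub-bands according to their energy."""
        tonality = np.array([1.0 - spectral_flatness(b) for b in sub_bands])
        second_band = int(np.argmax(tonality))          # coded by the second quantizer
        weights = np.array(quantized_energy, dtype=float)
        weights[second_band] = 0.0
        weights = weights / (weights.sum() + 1e-12)
        first_bits = np.floor(weights * total_bits).astype(int)   # first-quantizer bits per band
        return second_band, first_bits
    ```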
  • Patent number: 10643035
    Abstract: A computer-implemented technique is described for facilitating the creation of a language understanding (LU) component for use with an application. The technique allows a developer to select a subset of parameters from a larger set of parameters. The subset of parameters pertains to a LU scenario to be handled by the application. The larger set of parameters pertains to a plurality of LU scenarios handled by an already-existing generic LU model. The technique creates a constrained LU component that is based on the subset of parameters in conjunction with the generic LU model. At runtime, the constrained LU component interprets input language items using the generic LU model in a manner that is constrained by the subset of parameters that have been selected, to provide an output result. The technique also allows the developer to create new rules and/or supplemental models.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: May 5, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Young-Bum Kim, Ruhi Sarikaya, Alexandre Rochette
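    A minimal sketch of the constrained-LU idea described above: a generic LU model (here a hypothetical callable returning scored intents) is wrapped so that only the developer-selected subset of parameters can be returned.
    ```python
    # Sketch: constrain a generic LU model's output to a selected parameter subset.
    def constrained_lu(generic_model, allowed_intents):
        def interpret(utterance: str):
            scored = generic_model(utterance)          # e.g. {"set_alarm": 0.6, "play_music": 0.3, ...}
            constrained = {k: v for k, v in scored.items() if k in allowed_intents}
            if not constrained:
                return None
            return max(constrained, key=constrained.get)
        return interpret

    # Usage with a toy generic model:
    toy_model = lambda text: {"set_alarm": 0.6, "play_music": 0.3, "book_flight": 0.1}
    interpret = constrained_lu(toy_model, allowed_intents={"set_alarm", "book_flight"})
    print(interpret("wake me at 7"))   # -> "set_alarm"
    ```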
  • Patent number: 10635863
    Abstract: Fragment recall and adaptive automated translation are disclosed herein. An example method includes determining that an exact or fuzzy match for a portion of a source input cannot be found in a translation memory, performing fragment recall by matching subsegments in the portion against one or more whole translation units stored in the translation memory, and matching subsegments in the portion against corresponding one or more subsegments inside the one or more matching whole translation units, and returning any of the one or more matching whole translation units and the one or more matching subsegments as a fuzzy match, as well as the translations of those subsegments.
    Type: Grant
    Filed: October 30, 2017
    Date of Patent: April 28, 2020
    Assignee: SDL Inc.
    Inventors: Erik de Vrieze, Keith Mills
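    The fragment-recall step above can be illustrated with a simple word n-gram search against stored translation units; the n-gram heuristic below is an assumption standing in for the product's actual matching.
    ```python
    # Sketch: when no whole-segment match exists, recall subsegments of the
    # source that also occur in stored translation units.
    def ngrams(words, n):
        return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

    def fragment_recall(source: str, translation_memory: dict, min_len: int = 2):
        """translation_memory maps source segments to target segments."""
        words = source.lower().split()
        hits = []
        for n in range(len(words), min_len - 1, -1):
            for frag in ngrams(words, n):
                for tm_source, tm_target in translation_memory.items():
                    if frag in tm_source.lower():
                        hits.append((frag, tm_source, tm_target))
        return hits

    tm = {"the red car is fast": "das rote Auto ist schnell"}
    print(fragment_recall("a red car in the street", tm))  # recalls the fragment "red car"
    ```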
  • Patent number: 10629184
    Abstract: Cepstral variance normalization is described for audio feature extraction.
    Type: Grant
    Filed: December 22, 2014
    Date of Patent: April 21, 2020
    Assignee: Intel Corporation
    Inventors: Tobias Bocklet, Adam Marek
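    For context, standard cepstral mean and variance normalization (CMVN) over an utterance's cepstral features looks like the following; this is a generic illustration of the technique named in the abstract, not Intel's specific implementation.
    ```python
    # Generic cepstral mean and variance normalization (CMVN).
    import numpy as np

    def cepstral_variance_normalize(features: np.ndarray, eps: float = 1e-8) -> np.ndarray:
        """features: (n_frames, n_cepstra). Returns zero-mean, unit-variance features per coefficient."""
        mean = features.mean(axis=0, keepdims=True)
        std = features.std(axis=0, keepdims=True)
        return (features - mean) / (std + eps)
    ```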
  • Patent number: 10620912
    Abstract: A system, computer program product, and method are provided for use with an intelligent computer platform to convert speech intents to one or more physical actions. The aspect of converting speech intent includes receiving audio, converting the audio to text, parsing the text into segments, and identifying a physical action and an associated application that are proximal in time to the received audio. A corpus is searched for evidence of a pattern associated with the received audio and the corresponding physical action(s) and associated application. An outcome is generated from the evidence. The outcome includes identification of an application and production of an affirmative instruction. The instruction is converted to a user interface trace that is executed within the identified application.
    Type: Grant
    Filed: October 25, 2017
    Date of Patent: April 14, 2020
    Assignee: International Business Machines Corporation
    Inventors: Maryam Ashoori, Justin D. Weisz
  • Patent number: 10620911
    Abstract: A system, computer program product, and method are provided for use with an intelligent computer platform to convert audio data intents to one or more physical actions. The aspect of converting audio data intent includes receiving audio, converting the audio to text, parsing the text into segments, and identifying a physical action and an associated application that are proximal in time to the received audio. A corpus is searched for evidence of the text to identify evidence of a physical user interface trace with a select application. An outcome is generated from the evidence. The outcome includes an instruction to invoke a user interface trace with the select application as a representation of the received audio.
    Type: Grant
    Filed: October 25, 2017
    Date of Patent: April 14, 2020
    Assignee: International Business Machines Corporation
    Inventors: Maryam Ashoori, Justin D. Weisz
  • Patent number: 10621979
    Abstract: A speech recognition method and a mobile terminal relate to the field of electronic and information technologies, and can flexibly perform speech collection and improve a speech recognition rate. The method includes acquiring, by a mobile terminal, an orientation/motion status of the mobile terminal, and determining, according to the orientation/motion status, a voice collection apparatus for voice collection; acquiring, by the mobile terminal, a speech signal from the voice collection apparatus; and recognizing, by the mobile terminal, the speech signal. The present disclosure is applied to a scenario in which the mobile terminal performs speech recognition.
    Type: Grant
    Filed: October 7, 2016
    Date of Patent: April 14, 2020
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: Weidong Tang
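    The dispatch described above (orientation/motion status to voice collection apparatus) can be sketched as a simple mapping; the status names and the mapping below are assumptions for illustration, as the abstract does not fix a particular rule.
    ```python
    # Toy sketch: pick a voice collection apparatus from the terminal's
    # orientation/motion status, then recognize the collected speech.
    def select_voice_collector(status: str) -> str:
        mapping = {
            "face_up_on_table": "bottom_microphone",
            "held_to_ear": "earpiece_microphone",
            "in_motion": "headset_microphone",
        }
        return mapping.get(status, "bottom_microphone")

    def recognize_speech(status: str, capture, recognize):
        """capture(mic_name) -> audio; recognize(audio) -> text (both hypothetical)."""
        mic = select_voice_collector(status)
        return recognize(capture(mic))
    ```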
  • Patent number: 10607613
    Abstract: An audio signal coding apparatus includes a time-frequency transformer that outputs sub-band spectra from an input signal; a sub-band energy quantizer; a tonality calculator that analyzes tonality of the sub-band spectra; a bit allocator that selects a second sub-band on which quantization is performed by a second quantizer on the basis of the analysis result of the tonality and quantized sub-band energy, and determines a first number of bits to be allocated to a first sub-band on which quantization is performed by a first quantizer; the first quantizer that performs first coding using the first number of bits; the second quantizer that performs coding using a second coding method; and a multiplexer.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: March 31, 2020
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Takuya Kawashima, Hiroyuki Ehara
  • Patent number: 10607621
    Abstract: A method for predicting a bandwidth extension frequency band signal includes demultiplexing a received bitstream to obtain a frequency domain signal; determining whether a highest frequency bin, to which a bit is allocated, of the frequency domain signal is less than a preset start frequency bin of a bandwidth extension frequency band; predicting an excitation signal of the bandwidth extension frequency band according to the determination; and predicting the bandwidth extension frequency band signal according to the predicted excitation signal of the bandwidth extension frequency band and a frequency envelope of the bandwidth extension frequency band.
    Type: Grant
    Filed: July 3, 2019
    Date of Patent: March 31, 2020
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Zexin Liu, Lei Miao, Fengyan Qi
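    A rough sketch of the decision and prediction flow described above: if the highest bit-allocated bin lies below the preset start bin of the extension band, an excitation is predicted (pseudo-random noise here, as an assumption) and shaped by the transmitted frequency envelope; otherwise the top of the coded spectrum is reused as the excitation source. The fallback rule and normalization are illustrative, not the codec's.
    ```python
    # Sketch of bandwidth-extension band prediction (numpy array inputs).
    import numpy as np

    def predict_bwe(core_spectrum, bits_per_bin, bwe_start_bin, bwe_envelope, rng=np.random):
        allocated = np.nonzero(bits_per_bin > 0)[0]
        highest_allocated = allocated[-1] if allocated.size else -1
        n_bwe = len(bwe_envelope)
        if highest_allocated < bwe_start_bin:
            # No coded content near the extension band: predict an excitation.
            excitation = rng.standard_normal(n_bwe)
        else:
            # Reuse the top of the coded spectrum as the excitation source.
            lo = max(0, bwe_start_bin - n_bwe)
            excitation = np.resize(core_spectrum[lo:bwe_start_bin], n_bwe)
        excitation = excitation / (np.sqrt(np.mean(excitation ** 2)) + 1e-12)  # normalize energy
        return excitation * bwe_envelope                                       # apply frequency envelope
    ```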
  • Patent number: 10600425
    Abstract: In a system for converting a channel-based 3D audio signal to a higher-order Ambisonics (HOA) audio signal, the channel-based 3D audio signal is transformed from the time domain to the frequency domain. A primary-ambient decomposition is carried out for three-channel triplets of blocks of the frequency-domain channel-based 3D audio signal, wherein directional signals and ambient signals are provided for each triplet. From the directional signals, directional information of a total directional signal for each triplet is derived. That total directional signal is HOA encoded according to the derived directions, and the ambient signals are HOA encoded according to channel positions. The HOA coefficients of the HOA-encoded directional signal and the HOA coefficients of the HOA-encoded ambient signal are superimposed in order to obtain a HOA coefficients signal for the channel-based 3D audio signal, followed by a transformation into the time domain.
    Type: Grant
    Filed: November 16, 2016
    Date of Patent: March 24, 2020
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Johannes Boehm, Xiaoming Chen
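    A greatly simplified, first-order-only sketch of the encode-and-superimpose step above: a directional signal is encoded at its derived direction, each ambient signal at its fixed channel position, and the resulting coefficient sets are summed. Real HOA processing uses higher orders and operates in the frequency domain.
    ```python
    # First-order Ambisonics encoding and superposition (illustrative only).
    import numpy as np

    def foa_encode(signal: np.ndarray, azimuth: float, elevation: float) -> np.ndarray:
        """First-order (B-format-style) encoding of a mono signal; returns shape (4, n)."""
        w = np.ones_like(signal)
        x = np.cos(azimuth) * np.cos(elevation) * np.ones_like(signal)
        y = np.sin(azimuth) * np.cos(elevation) * np.ones_like(signal)
        z = np.sin(elevation) * np.ones_like(signal)
        return np.vstack([w, x, y, z]) * signal

    def encode_and_superimpose(directional, direction, ambients, channel_positions):
        """directional: (n,); ambients: list of (n,); positions are (azimuth, elevation) in radians."""
        hoa = foa_encode(directional, *direction)
        for amb, pos in zip(ambients, channel_positions):
            hoa = hoa + foa_encode(amb, *pos)
        return hoa
    ```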
  • Patent number: 10599785
    Abstract: A system includes a plurality of sound devices, an electronic device having a serial port emulator configured to generate a serial port emulation corresponding to each of the plurality of sound devices, and a computer-readable storage medium having one or more programming instructions. The system receives compressed and encoded sound input from a first sound device via a serial port emulation associated with the first sound device. The sound input is associated with a first language. The system decodes and decompresses the compressed and encoded sound input to generate decompressed and decoded sound input, generates sound output by translating the decompressed and decoded sound input from the first language to a second language, compresses and encodes the sound output to generate compressed and encoded sound output, and transmits the compressed and encoded sound output to a second sound device via a serial port emulation associated with the second sound device.
    Type: Grant
    Filed: May 10, 2018
    Date of Patent: March 24, 2020
    Assignee: WAVERLY LABS INC.
    Inventors: William O. Goethals, Jainam Shah, Benjamin J. Carlson
  • Patent number: 10593318
    Abstract: A system, a computer program product, and a method for controlling synthesized speech output on a voice-controlled device. A sensor is used to capture an image of a face of a person. A database of previously stored images of facial features is accessed. The voice-controlled device selects: i) a first set of conversational starters in response to not recognizing the at least one person; ii) a second set of conversational starters in response to recognizing the person and recognizing previous communications with the person; iii) a third set of conversational starters in response to recognizing the person but not recognizing previous communications with the person; or iv) a fourth set of conversational starters in response to recognizing the at least one person and recognizing previous communications with the person but not knowing the person's name. The voice-controlled device outputs the selected set of conversational starters.
    Type: Grant
    Filed: December 26, 2017
    Date of Patent: March 17, 2020
    Assignee: International Business Machines Corporation
    Inventors: Shang Qing Guo, Jonathan Lenchner
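    The four-way selection logic above reduces to a small decision function. In the sketch below, the recognition results are booleans assumed to come from some face-recognition front end (not shown), and the set names are placeholders.
    ```python
    # Sketch of the conversational-starter selection logic from the abstract.
    def select_starters(recognized: bool, has_history: bool, knows_name: bool) -> str:
        if not recognized:
            return "first_set"       # person not recognized
        if not has_history:
            return "third_set"       # recognized, no previous communications
        if not knows_name:
            return "fourth_set"      # recognized with previous communications, name unknown
        return "second_set"          # recognized with previous communications
    ```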