Patents Examined by Michael N. Opsasnick
  • Patent number: 11031009
    Abstract: Example implementations involve a framework for knowledge base construction of components and problems in short texts. The framework extracts domain-specific components and problems from textual corpora such as service manuals, repair records, and public Q/A forums using: 1) domain-specific syntactic rules leveraging part-of-speech (POS) tagging, and 2) a neural attention-based seq2seq model that tags raw sentences end-to-end, identifying components and their associated problems. Once acquired, this knowledge can be leveraged to accelerate the development and deployment of intelligent conversational assistants for various industrial AI scenarios (e.g., repair recommendation, operations, and so on) through better understanding of user utterances. The example implementations achieve better tagging accuracy on various datasets, outperforming well-known off-the-shelf systems.
    Type: Grant
    Filed: April 10, 2019
    Date of Patent: June 8, 2021
    Assignee: Hitachi, Ltd.
    Inventors: Walid Shalaby, Chetan Gupta, Maria Teresa Gonzalez Diaz, Adriano Arantes
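    To make the rule-based half of this extraction concrete, here is a minimal sketch in Python that applies one hypothetical syntactic rule over POS-tagged tokens: a noun (candidate component) followed by a copula and an adjective or participle (candidate problem). The rule, tag set, and example sentence are illustrative assumptions, not the patent's actual grammar, and the seq2seq tagger is not shown.
    ```python
    # Minimal sketch of a domain-specific syntactic rule for pulling
    # (component, problem) pairs out of POS-tagged repair text.
    # The rule (NOUN + is/are + ADJ/participle) is an illustrative assumption.

    def extract_component_problems(tagged_tokens):
        """tagged_tokens: list of (word, pos) pairs from any POS tagger."""
        pairs = []
        for i in range(len(tagged_tokens) - 2):
            (w1, t1), (w2, _), (w3, t3) = tagged_tokens[i:i + 3]
            if t1.startswith("NN") and w2.lower() in {"is", "are", "was", "were"} \
                    and t3 in {"JJ", "VBN", "VBG"}:
                pairs.append({"component": w1, "problem": w3})
        return pairs

    # Example repair-record sentence, pre-tagged with Penn Treebank style tags.
    sentence = [("compressor", "NN"), ("is", "VBZ"), ("leaking", "VBG"),
                ("and", "CC"), ("the", "DT"), ("belt", "NN"),
                ("is", "VBZ"), ("worn", "VBN")]
    print(extract_component_problems(sentence))
    # -> [{'component': 'compressor', 'problem': 'leaking'},
    #     {'component': 'belt', 'problem': 'worn'}]
    ```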
  • Patent number: 11024325
    Abstract: A voice controlled assistant has a housing to hold one or more microphones, one or more speakers, and various computing components. The housing has an elongated cylindrical body extending along a center axis between a base end and a top end. The microphone(s) are mounted in the top end and the speaker(s) are mounted proximal to the base end. A control knob is rotatably mounted to the top end of the housing to rotate about the center axis. A light indicator is arranged on the control knob to exhibit various appearance states to provide visual feedback with respect to the one or more functions being performed by the assistant. In one case, the light indicator is used to uniquely identify participants involved in a call.
    Type: Grant
    Filed: July 17, 2017
    Date of Patent: June 1, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Daniel Christopher Bay, Ramy S. Sadek, Menashe Haskin, Jason Zimmer, Robert Ramsey Flenniken, Heinz-Dominik Langhammer
  • Patent number: 11024305
    Abstract: Embodiments described herein include systems and methods for using image searching with voice recognition commands. Embodiments of a method may include providing a user interface via a target application and receiving a user selection of an area on the user interface, the area including a search image. Embodiments may also include receiving an associated voice command and associating, by the computing device, the associated voice command with the search image.
    Type: Grant
    Filed: February 23, 2018
    Date of Patent: June 1, 2021
    Assignee: Dolbey & Company, Inc.
    Inventor: Curtis A. Weeks
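    As a rough illustration of the association step, the sketch below keeps a simple registry that maps a spoken command to the image captured from the selected screen area. The class and method names, and the byte-string stand-in for image data, are invented for illustration rather than taken from the patent.
    ```python
    # Minimal sketch: associate a voice command with a screen-area "search image".

    class VoiceImageRegistry:
        def __init__(self):
            self._commands = {}          # normalized command -> image data

        def associate(self, voice_command, search_image):
            """Store the captured area image under the spoken command."""
            self._commands[voice_command.strip().lower()] = search_image

        def lookup(self, voice_command):
            """Return the image to search for when the command is heard again."""
            return self._commands.get(voice_command.strip().lower())

    registry = VoiceImageRegistry()
    registry.associate("open billing tab", b"<pixels of selected area>")
    print(registry.lookup("Open Billing Tab") is not None)   # True
    ```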
  • Patent number: 11017775
    Abstract: A method for electronically utilizing content in a communication between a customer and a customer representative is provided. An audible conversation between a customer and a service representative is captured. At least a portion of the audible conversation is converted into computer searchable data. The computer searchable data is analyzed during the audible conversation to identify relevant meta tags previously stored in a data repository or generated during the audible conversation. Each meta tag is associated with the customer. Each meta tag provides a contextual item determined from at least a portion of one of a current or previous conversation with the customer. A meta tag determined to be relevant to the current conversation between the service representative and the customer is displayed in real time to the service representative currently conversing with the customer.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: May 25, 2021
    Assignee: United Services Automobile Association (“USAA”)
    Inventors: Zakery L. Johnson, Jonathan E. Neuse
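    To give the real-time meta-tag lookup a concrete shape, the sketch below scores previously stored customer tags by keyword overlap with the streaming transcript and returns the relevant ones. The tag records, keyword sets, and threshold are illustrative assumptions, not the patented matching logic.
    ```python
    # Minimal sketch: as transcript text streams in, surface stored customer
    # meta tags whose keywords appear in the conversation.

    customer_meta_tags = [
        {"tag": "auto-loan-refinance", "keywords": {"loan", "refinance", "rate"}},
        {"tag": "recent-claim",        "keywords": {"claim", "accident", "adjuster"}},
    ]

    def relevant_tags(transcript_chunk, tags, min_hits=1):
        words = set(transcript_chunk.lower().split())
        return [t["tag"] for t in tags if len(t["keywords"] & words) >= min_hits]

    # Called repeatedly as speech-to-text output arrives during the call.
    print(relevant_tags("I want to refinance my loan", customer_meta_tags))
    # -> ['auto-loan-refinance']
    ```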
  • Patent number: 10978045
    Abstract: A foreign language reading and displaying device and method, and a motion learning device and method based on a foreign language rhythm detection sensor, include: generating, from the separated foreign language phonemes corresponding to a syllable, native language phonemes from among consonants and vowels in accordance with predetermined pronunciation rules; combining the generated native language phonemes in accordance with foreign language combination rules to generate and display native language syllables, words, and sentences; displaying any part of the separated foreign language phonemes that does not correspond to a syllable of a foreign language word as a foreign language phoneme according to a predetermined foreign language pronunciation rule; and displaying at least one of the native language sentence and the inputted foreign language sentence on a screen.
    Type: Grant
    Filed: November 25, 2015
    Date of Patent: April 13, 2021
    Assignee: MGLISH INC.
    Inventor: Man Hong Lee
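    The core transformation described above maps foreign-language phonemes to native-language phonemes by rule and leaves unmapped phonemes in their foreign form. The toy mapping table and syllable split below are purely illustrative and do not reflect the patent's actual pronunciation or combination rules.
    ```python
    # Minimal sketch of the phoneme-substitution step: each foreign phoneme in a
    # syllable is mapped to a native phoneme by rule; phonemes with no rule are
    # displayed as-is in their foreign form. The mapping table is a toy example.

    PRONUNCIATION_RULES = {"s": "s", "t": "t", "r": "l", "ae": "a", "ng": "ng"}

    def render_syllable(foreign_phonemes):
        native, leftover = [], []
        for p in foreign_phonemes:
            if p in PRONUNCIATION_RULES:
                native.append(PRONUNCIATION_RULES[p])
            else:
                leftover.append(p)           # shown as a foreign phoneme
        return "".join(native), leftover

    syllables = [["s", "t", "r", "ae", "ng"], ["th"]]   # e.g. a word split by rule
    for s in syllables:
        print(render_syllable(s))
    # -> ('stlang', [])  and  ('', ['th'])
    ```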
  • Patent number: 10964322
    Abstract: A voice interaction tool for voice-assisted application prototypes is described. A visual page of an application prototype is displayed in a design interface. The design interface is controlled to provide an interaction interface to receive a trigger and an associated action for the visual page of the application prototype. The trigger may correspond to one of a voice command, a user gesture, or a time delay, and the action may correspond to one of a speech response, a page transition to an additional visual page of the application prototype, or playback of a media file. User input is received to provide the trigger and the action, and associated interaction data is generated to include the trigger, the action, and the visual page of the application prototype. The associated interaction data is stored to enable testing of the trigger and the action during a testing phase of the application prototype.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: March 30, 2021
    Assignee: Adobe Inc.
    Inventors: Mark C. Webster, Susse Soenderby Jensen, Scott Thomas Werner, Daniel Cameron Cundiff, Blake Allen Clayton Sawyer
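    The associated interaction data amounts to a (trigger, action, page) record. A minimal way to model it is sketched below; the enum members mirror the trigger and action types listed in the abstract, while the field names and example values are chosen for illustration.
    ```python
    # Minimal sketch of the interaction record a design tool might store for a
    # voice-assisted prototype: a trigger, an action, and the visual page they
    # belong to.

    from dataclasses import dataclass
    from enum import Enum

    class Trigger(Enum):
        VOICE_COMMAND = "voice_command"
        USER_GESTURE = "user_gesture"
        TIME_DELAY = "time_delay"

    class Action(Enum):
        SPEECH_RESPONSE = "speech_response"
        PAGE_TRANSITION = "page_transition"
        MEDIA_PLAYBACK = "media_playback"

    @dataclass
    class Interaction:
        page_id: str
        trigger: Trigger
        trigger_value: str        # e.g. the spoken phrase or a delay in seconds
        action: Action
        action_value: str         # e.g. the reply text or the target page id

    weather_card = Interaction(
        page_id="home",
        trigger=Trigger.VOICE_COMMAND,
        trigger_value="what's the weather",
        action=Action.PAGE_TRANSITION,
        action_value="weather_page",
    )
    print(weather_card)
    ```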
  • Patent number: 10963801
    Abstract: Techniques for generating solutions from aural inputs include identifying, with one or more machine learning engines, a plurality of aural signals provided by two or more human speakers, at least some of the plurality of aural signals associated with a human-perceived problem; parsing, with the one or more machine learning engines, the plurality of aural signals to generate a plurality of terms, each of the terms associated with the human-perceived problem; deriving, with the one or more machine learning engines, a plurality of solution sentiments and a plurality of solution constraints from the plurality of terms; generating, with the one or more machine learning engines, at least one solution to the human-perceived problem based on the derived solution sentiments and solution constraints; and presenting the at least one solution of the human-perceived problem to the two or more human speakers through at least one of a graphical interface or an auditory interface.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: March 30, 2021
    Assignee: X Development LLC
    Inventors: Nicholas John Foster, Carsten Schwesig
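    The pipeline shape (aural signals to terms, terms to sentiments and constraints, constraints to a proposed solution) can be sketched as below. The keyword lists and the canned suggestion are illustrative stand-ins for the machine learning engines the abstract describes, not an implementation of them.
    ```python
    # Minimal sketch of the pipeline: transcribed speaker turns are parsed into
    # terms, terms yield crude "sentiments" and "constraints", and a suggestion
    # is chosen from the constraints.

    TURNS = ["We keep missing the Tuesday deadline",
             "I can't stay late, but mornings are fine"]

    NEGATIVE = {"missing", "can't", "late"}
    CONSTRAINT_WORDS = {"tuesday", "mornings", "deadline"}

    def derive(turns):
        terms = [w.strip(",.").lower() for t in turns for w in t.split()]
        sentiments = [w for w in terms if w in NEGATIVE]
        constraints = [w for w in terms if w in CONSTRAINT_WORDS]
        return sentiments, constraints

    def propose(sentiments, constraints):
        if "deadline" in constraints and "mornings" in constraints:
            return "Move the working session to a morning slot before the deadline."
        return "No suggestion."

    s, c = derive(TURNS)
    print(propose(s, c))
    ```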
  • Patent number: 10957329
    Abstract: In one embodiment, a method includes receiving, at a client system associated with a user, a user input; parsing the user input to identify an n-gram associated with a wake word from a plurality of wake words corresponding to a plurality of assistant systems associated with the client system, wherein each assistant system provides a particular set of functions; determining that the wake word corresponds to a first assistant system of the plurality of assistant systems, wherein the first assistant system provides a first set of functions; sending, to the first assistant system, a request to set an assistant xbot of the first assistant system into a listening mode; and receiving, from the first assistant system, an indication that the assistant xbot is in listening mode responsive to a determination that the user has permission to access the first assistant system.
    Type: Grant
    Filed: November 7, 2018
    Date of Patent: March 23, 2021
    Assignee: Facebook, Inc.
    Inventors: Xiaohu Liu, Baiyang Liu, Rajen Subba
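    A minimal sketch of the routing step follows: each wake word maps to an assistant system, and the detected wake word decides which assistant is asked to start listening, subject to a permission check. The wake words, assistant identifiers, and the access-check callback are illustrative placeholders.
    ```python
    # Minimal sketch: map wake words to assistant systems and route the request
    # to whichever assistant the detected wake word belongs to.

    WAKE_WORDS = {"hey alpha": "assistant_alpha", "ok beta": "assistant_beta"}

    def route_utterance(user_input, user_has_access):
        text = user_input.lower()
        for wake_word, assistant_id in WAKE_WORDS.items():
            if text.startswith(wake_word):
                if user_has_access(assistant_id):
                    return f"{assistant_id}: xbot set to listening mode"
                return f"{assistant_id}: access denied"
        return "no wake word detected"

    print(route_utterance("Hey Alpha, play some jazz", lambda assistant: True))
    # -> assistant_alpha: xbot set to listening mode
    ```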
  • Patent number: 10950242
    Abstract: Systems and methods of diarization using linguistic labeling include receiving a set of diarized textual transcripts. At least one heuristic is automatically applied to the diarized textual transcripts to select transcripts likely to be associated with an identified group of speakers. The selected transcripts are analyzed to create at least one linguistic model. The linguistic model is applied to transcripted audio data to label a portion of the transcripted audio data as having been spoken by the identified group of speakers. Still further embodiments of diarization using linguistic labeling may serve to label agent speech and customer speech in a recorded and transcripted customer service interaction.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: March 16, 2021
    Assignee: Verint Systems Ltd.
    Inventors: Omer Ziv, Ran Achituv, Ido Shapira, Jeremie Dreyfuss
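    The same linguistic-labeling idea (shared with the two related patents below) can be sketched as follows: a heuristic picks transcripts likely spoken by agents, a unigram "linguistic model" is built from them, and new turns are labeled agent or customer by how many model words they contain. The heuristic phrases, the unigram model, and the threshold are illustrative assumptions, not the patented method.
    ```python
    # Minimal sketch of diarization via linguistic labeling.

    from collections import Counter

    transcripts = [
        "thank you for calling how may i help you today",
        "hi my internet has been down since yesterday",
        "i appreciate your patience let me pull up your account",
    ]

    AGENT_PHRASES = ("thank you for calling", "how may i help", "pull up your account")

    def looks_like_agent(text):                      # simple selection heuristic
        return any(p in text for p in AGENT_PHRASES)

    # Unigram "linguistic model" built from the heuristically selected transcripts.
    agent_model = Counter(w for t in transcripts if looks_like_agent(t)
                          for w in t.split())

    def label_turn(text, threshold=3):
        hits = sum(1 for w in text.split() if w in agent_model)
        return "agent" if hits >= threshold else "customer"

    print(label_turn("let me pull up your account and check"))   # agent
    print(label_turn("my internet has been down"))               # customer
    ```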
  • Patent number: 10950241
    Abstract: Systems and methods of diarization using linguistic labeling include receiving a set of diarized textual transcripts. At least one heuristic is automatically applied to the diarized textual transcripts to select transcripts likely to be associated with an identified group of speakers. The selected transcripts are analyzed to create at least one linguistic model. The linguistic model is applied to transcripted audio data to label a portion of the transcripted audio data as having been spoken by the identified group of speakers. Still further embodiments of diarization using linguistic labeling may serve to label agent speech and customer speech in a recorded and transcripted customer service interaction.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: March 16, 2021
    Assignee: Verint Systems Ltd.
    Inventors: Omer Ziv, Ran Achituv, Ido Shapira, Jeremie Dreyfuss
  • Patent number: 10943581
    Abstract: Systems, methods, and devices for training and testing utterance-based frameworks are disclosed. The training and testing can be conducted using synthetic utterance samples in addition to natural utterance samples. The synthetic utterance samples can be generated based on a vector space representation of natural utterances. In one method, a synthetic weight vector associated with a vector space is generated. An average representation of the vector space is added to the synthetic weight vector to form a synthetic feature vector. The synthetic feature vector is used to generate a synthetic voice sample. The synthetic voice sample is provided to the utterance-based framework as at least one of a testing or training sample.
    Type: Grant
    Filed: October 23, 2018
    Date of Patent: March 9, 2021
    Assignee: Spotify AB
    Inventor: Daniel Bromand
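    The synthetic-sample recipe is essentially an addition in the utterance vector space: synthetic feature vector = average representation + synthetic weight vector. The sketch below shows that arithmetic; the dimensions, the random draw of the weight vector, and the stand-in "natural features" are illustrative assumptions.
    ```python
    # Minimal sketch of forming a synthetic feature vector from a vector space
    # of natural utterances.

    import numpy as np

    rng = np.random.default_rng(0)

    natural_features = rng.normal(size=(100, 16))        # stand-in natural utterances
    average_representation = natural_features.mean(axis=0)

    synthetic_weight_vector = rng.normal(scale=0.1, size=16)
    synthetic_feature_vector = average_representation + synthetic_weight_vector

    # The synthetic feature vector would then drive a synthesis stage to produce
    # a synthetic voice sample used as a training or testing input.
    print(synthetic_feature_vector.shape)   # (16,)
    ```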
  • Patent number: 10937423
    Abstract: The present disclosure provides a smart device function guiding method and system, wherein the method comprises: obtaining a user's speech data and obtaining an operation instruction corresponding to the speech data; judging whether the operation instruction complies with a preset guidance condition; and sending a guidance speech to the smart device if the operation instruction complies with the preset guidance condition. The solution of the present disclosure can improve the efficiency of performing function guidance through speech interaction, compared with performing the function guidance through an app or providing only simple speech function guidance as in the prior art.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: March 2, 2021
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventor: Peng Yang
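    The guidance decision itself is a simple condition check, sketched below: if the operation instruction derived from the user's speech matches a preset guidance condition, a guidance utterance is sent back to the device. The condition table and instruction names are illustrative placeholders.
    ```python
    # Minimal sketch of the guidance decision.

    GUIDANCE_CONDITIONS = {
        "play_music": "You can also say 'play my favorites' to start a playlist.",
    }

    def maybe_guide(operation_instruction):
        guidance = GUIDANCE_CONDITIONS.get(operation_instruction)
        return {"send_guidance_speech": guidance}   # None means no guidance sent

    print(maybe_guide("play_music"))
    ```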
  • Patent number: 10929606
    Abstract: A method for intelligent assistance includes identifying, within an input comprising text, one or more insertion points for providing additional information. A follow-up expression that includes at least a portion of the input and the additional information at the one or more insertion points is generated for clarifying or supplementing the meaning of the input.
    Type: Grant
    Filed: February 23, 2018
    Date of Patent: February 23, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Justin C. Martineau, Avik Ray, Hongxia Jin
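    The sketch below shows the shape of such a follow-up expression: an insertion point is located in the input text and clarifying information is spliced in at that point. The insertion rule (immediately after a chosen word) and the example sentence are illustrative assumptions.
    ```python
    # Minimal sketch: build a follow-up expression by inserting additional
    # information at an insertion point in the original input.

    def follow_up(input_text, insertion_word, additional_info):
        words = input_text.split()
        if insertion_word in words:
            i = words.index(insertion_word) + 1         # insertion point
            words[i:i] = [additional_info]
        return " ".join(words)

    print(follow_up("Book a table for dinner", "table", "for four people"))
    # -> "Book a table for four people for dinner"
    ```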
  • Patent number: 10910000
    Abstract: A method for audio recognition comprises: dividing audio data to be recognized to obtain a plurality of frames of audio data; calculating, based on audio variation trends among the plurality of frames and within each of the plurality of frames, a characteristic value for each frame of the audio data to be recognized; and matching the characteristic value of each frame of the audio data to be recognized with a pre-established audio characteristic value comparison table to obtain a recognition result, wherein the audio characteristic value comparison table is established based on the audio variation trends among the frames and within each of the frames of sample data.
    Type: Grant
    Filed: December 11, 2018
    Date of Patent: February 2, 2021
    Assignee: ADVANCED NEW TECHNOLOGIES CO., LTD.
    Inventors: Zhijun Du, Nan Wang
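    One way to picture a trend-based characteristic value is sketched below: each frame is reduced to a bit pattern encoding whether sub-band energy rises or falls within the frame and versus the previous frame, and the resulting values are looked up in a table built the same way from sample data. The frame size, band count, and hashing scheme are illustrative assumptions, not the patented fingerprint.
    ```python
    # Minimal sketch of frame-wise characteristic values from audio variation
    # trends, matched against a pre-built comparison table.

    import numpy as np

    def frame_values(signal, frame_len=256, bands=8):
        frames = [signal[i:i + frame_len]
                  for i in range(0, len(signal) - frame_len + 1, frame_len)]
        prev, values = None, []
        for f in frames:
            energies = np.abs(np.fft.rfft(f))[:bands]
            bits = (np.diff(energies) > 0).astype(int)              # within-frame trend
            if prev is not None:
                bits = np.concatenate([bits, (energies > prev).astype(int)])  # between-frame trend
            values.append(int("".join(map(str, bits)), 2))
            prev = energies
        return values

    rng = np.random.default_rng(1)
    sample = rng.normal(size=4096)
    table = {v: "song_42" for v in frame_values(sample)}            # comparison table

    query = sample[256:2304]                                        # an excerpt of the sample
    matches = [table.get(v) for v in frame_values(query)]
    print(matches.count("song_42"), "of", len(matches), "frames matched")
    ```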
  • Patent number: 10902199
    Abstract: The present disclosure relates generally to systems and methods for analyzing intent. Intents may be analyzed to determine to which device or agent to route a communication. The analyzed intent information can also be used to formulate reports and analyze the accuracy of the identified intents with respect to the received communication.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: January 26, 2021
    Assignee: LIVEPERSON, INC.
    Inventors: Matthew Dunn, Joe Bradley, Laura Onu
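    As a small illustration of intent-based routing, the sketch below selects a destination from a detected intent and logs the decision so accuracy reports can be assembled later. The intent names, routes, and log format are illustrative placeholders.
    ```python
    # Minimal sketch: route a communication by its detected intent and keep a
    # log for later reporting.

    ROUTES = {"billing_question": "billing_team", "outage_report": "tech_support_bot"}
    routing_log = []

    def route(message_id, detected_intent):
        destination = ROUTES.get(detected_intent, "general_queue")
        routing_log.append({"message": message_id, "intent": detected_intent,
                            "routed_to": destination})
        return destination

    print(route("msg-001", "outage_report"))    # tech_support_bot
    print(routing_log)
    ```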
  • Patent number: 10902856
    Abstract: Systems and methods of diarization using linguistic labeling include receiving a set of diarized textual transcripts. At least one heuristic is automatically applied to the diarized textual transcripts to select transcripts likely to be associated with an identified group of speakers. The selected transcripts are analyzed to create at least one linguistic model. The linguistic model is applied to transcripted audio data to label a portion of the transcripted audio data as having been spoken by the identified group of speakers. Still further embodiments of diarization using linguistic labeling may serve to label agent speech and customer speech in a recorded and transcripted customer service interaction.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: January 26, 2021
    Assignee: Verint Systems Ltd.
    Inventors: Omer Ziv, Ran Achituv, Ido Shapira, Jeremie Dreyfuss
  • Patent number: 10891968
    Abstract: An interactive server, a control method thereof, and an interactive system are provided. The interactive server includes: a communicator which communicates with a display apparatus to receive a first uttered voice signal; a storage device which stores utterance history information of a second uttered voice signal received from the display apparatus before the first uttered voice signal is received; an extractor which extracts uttered elements from the received first uttered voice signal; and a controller which generates response information based on the utterance history information stored in the storage device and the extracted uttered elements and transmits the response information to the display apparatus. Therefore, the interactive server comprehends the user's intentions with respect to the user's various utterances, generates response information according to those intentions, and transmits the response information to the display apparatus.
    Type: Grant
    Filed: January 7, 2014
    Date of Patent: January 12, 2021
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ji-hye Chung, Cheong-jae Lee, Hye-jeong Lee, Yong-wook Shin
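    The sketch below shows how stored utterance history can resolve an under-specified follow-up: elements extracted from the current utterance are merged with earlier elements to fill in what the user meant. The slot names, extraction rules, and sample dialogue are illustrative assumptions.
    ```python
    # Minimal sketch: combine extracted uttered elements with utterance history
    # to build response information.

    session_history = []        # per-display-apparatus utterance history

    def extract_elements(utterance):
        elements = {}
        if "news channel" in utterance:
            elements.update(action="tune", channel="news channel")
        elif "that channel" in utterance:
            elements["action"] = "tune"            # channel left unresolved
        return elements

    def respond(utterance):
        elements = extract_elements(utterance)
        if elements.get("action") == "tune" and "channel" not in elements:
            for past in reversed(session_history):          # use utterance history
                if "channel" in past:
                    elements["channel"] = past["channel"]
                    break
        session_history.append(elements)
        return {"response_info": elements}

    print(respond("tune to the news channel"))
    print(respond("switch back to that channel"))
    # the second call resolves the channel from the stored utterance history
    ```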
  • Patent number: 10867602
    Abstract: The present disclosure provides a method and an apparatus for waking up via a speech. The method includes: obtaining a speech signal; decoding the speech signal according to a pre-generated searching space to obtain a speech recognition result, in which the searching space includes a path where an inversion model is located, the inversion model including a first inversion model generated by training based on one or more word segmentation results of each of one or more wake-up phrases; when the first preset number of words of the speech recognition result is obtained, determining whether the preset number of words contains at least part of the words in one of the one or more wake-up phrases; and determining cancellation of a wake-up operation directly if the preset number of words does not contain at least part of the words in one of the one or more wake-up phrases, and ending the decoding of the speech signal.
    Type: Grant
    Filed: December 21, 2016
    Date of Patent: December 15, 2020
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventor: Bin Yuan
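    The early-abort check can be pictured with the sketch below: once the first few decoded words are available, they are compared against the word segmentations of the configured wake-up phrases, and decoding is cancelled if none of those words appear. The wake-up phrases and the window size are illustrative placeholders.
    ```python
    # Minimal sketch of the partial-word check used to cancel a wake-up early.

    WAKE_UP_PHRASES = ["xiao du xiao du", "hello speaker"]
    SEGMENTED = [p.split() for p in WAKE_UP_PHRASES]

    def keep_decoding(first_words, preset_count=2):
        window = first_words[:preset_count]
        wake_words = {w for seg in SEGMENTED for w in seg}
        return any(w in wake_words for w in window)   # False -> cancel wake-up

    print(keep_decoding(["hello", "speaker"]))        # True, continue decoding
    print(keep_decoding(["play", "some", "music"]))   # False, cancel wake-up
    ```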
  • Patent number: 10861468
    Abstract: The apparatus for encoding a multi-channel signal having at least two channels includes: a parameter determiner for determining a broadband alignment parameter and a plurality of narrowband alignment parameters from the multi-channel signal; a signal aligner for aligning the at least two channels using the broadband alignment parameter and the plurality of narrowband alignment parameters to obtain aligned channels; a signal processor for calculating a mid-signal and a side signal using the aligned channels; a signal encoder for encoding the mid-signal to obtain an encoded mid-signal and for encoding the side signal to obtain an encoded side signal; and an output interface for generating an encoded multi-channel signal including the encoded mid-signal, the encoded side signal, information on the broadband alignment parameter and information on the plurality of narrowband alignment parameters.
    Type: Grant
    Filed: July 12, 2018
    Date of Patent: December 8, 2020
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Stefan Bayer, Eleni Fotopoulou, Markus Multrus, Guillaume Fuchs, Emmanuel Ravelli, Markus Schnell, Stefan Doehla, Wolfgang Jaegers, Martin Dietz, Goran Markovic
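    The front end of such an encoder can be sketched as below: a broadband (time-shift) alignment parameter is estimated, the channels are aligned, and the aligned channels are combined into a mid signal and a side signal for separate encoding. The narrowband (per-band) alignment is omitted, and the delay estimate and test signals are illustrative assumptions.
    ```python
    # Minimal sketch: broadband alignment followed by mid/side computation.

    import numpy as np

    rng = np.random.default_rng(3)
    left = rng.normal(size=1024)
    right = np.roll(left, 5) + 0.01 * rng.normal(size=1024)   # delayed copy of left

    # Broadband alignment parameter: inter-channel delay via cross-correlation.
    corr = np.correlate(right, left, mode="full")
    delay = int(np.argmax(corr)) - (len(left) - 1)
    right_aligned = np.roll(right, -delay)

    mid = 0.5 * (left + right_aligned)       # encoded as the main (mid) signal
    side = 0.5 * (left - right_aligned)      # encoded as the residual (side) signal

    print("estimated broadband delay:", delay)
    print("side energy after alignment:", float(np.sum(side ** 2)))
    ```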
  • Patent number: 10854196
    Abstract: Using a method of operating a system that includes remote servers and at least one electronic device, a user verbally instructs the electronic device to activate a function. The system uses the remote servers and other parts of the system to determine that the function is not currently enabled and requires a user acknowledgment of terms of use before the function is enabled. The system provides information on the terms of use and solicits a user acknowledgment of the terms of use. The user provides a verbal acknowledgment of the terms of use, and the verbal acknowledgment is received and stored in a persistent data store.
    Type: Grant
    Filed: February 22, 2018
    Date of Patent: December 1, 2020
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Christopher Geiger Parker, Ilana Rozanes, Sulman Riaz, Vinaya Nadig, Hariharan Srinivasan, Ninad Anil Parkhi, Michael Richard Baglole, Eric Wei Hoong Ow, Tina Sonal Patel
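    The enablement flow can be sketched as a small state check: a voice request for a disabled function triggers a terms-of-use prompt, and the user's verbal agreement is recorded in a persistent store before the function is enabled. The storage format, function names, and agreement check are illustrative placeholders.
    ```python
    # Minimal sketch of the verbal terms-of-use acknowledgment flow.

    import time

    acknowledgments = {}            # stand-in for a persistent data store
    enabled_functions = set()

    def request_function(user_id, function_name, verbal_reply=None):
        if function_name in enabled_functions:
            return "running " + function_name
        if verbal_reply is None:
            return "This feature requires agreement to its terms of use. Do you agree?"
        if "agree" in verbal_reply.lower():
            acknowledgments[(user_id, function_name)] = {"reply": verbal_reply,
                                                         "timestamp": time.time()}
            enabled_functions.add(function_name)
            return "terms accepted, " + function_name + " enabled"
        return "terms declined"

    print(request_function("user-1", "voice_shopping"))
    print(request_function("user-1", "voice_shopping", verbal_reply="Yes, I agree"))
    print(acknowledgments)
    ```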