Patents Examined by Thuykhanh Le
  • Patent number: 10650813
    Abstract: A computer-implemented method for analyzing content written on a board, on which text and/or a drawing are made, is disclosed. The method includes obtaining content data including a series of images that captures content being written on the board. The method also includes obtaining utterance data representing a series of utterances associated with the series of images. The method further includes extracting a section from the series of utterances based on a change in topics and recognizing a content block for the section from the content data. The content block includes one or more content parts written during the section. The method further includes calculating an evaluation value for the content block by using one or more utterances included in the section.
    Type: Grant
    Filed: May 25, 2017
    Date of Patent: May 12, 2020
    Assignee: International Business Machines Corporation
    Inventors: Tomonori Sugiura, Tomoka Mochizuki, Lianzi Wen, Munehiko Sato
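    The abstract above describes segmenting the utterances at topic changes and scoring the content block written during each section. A minimal, hypothetical Python sketch of that flow; the abstract does not name a topic-change detector, so word overlap between consecutive utterances stands in for it, and the board words and thresholds are invented for illustration.
    ```python
    def word_set(utterance):
        """Lowercased content words of one utterance."""
        return {w for w in utterance.lower().split() if len(w) > 3}

    def split_into_sections(utterances, threshold=0.2):
        """Start a new section when word overlap with the previous utterance
        falls below the threshold (a stand-in for topic-change detection)."""
        sections, current = [], [utterances[0]]
        for prev, cur in zip(utterances, utterances[1:]):
            a, b = word_set(prev), word_set(cur)
            overlap = len(a & b) / max(len(a | b), 1)
            if overlap < threshold:
                sections.append(current)
                current = []
            current.append(cur)
        sections.append(current)
        return sections

    def evaluate_block(content_block_words, section_utterances):
        """Toy evaluation value: how often words written on the board during
        the section are also spoken in that section."""
        spoken = set().union(*(word_set(u) for u in section_utterances))
        hits = sum(1 for w in content_block_words if w.lower() in spoken)
        return hits / max(len(content_block_words), 1)

    utterances = [
        "today we review the quarterly revenue numbers",
        "revenue grew in the quarterly report as shown here",
        "next topic is the hiring plan for engineering",
        "engineering hiring plan needs three new positions",
    ]
    block = ["revenue", "hiring"]   # words recognized from the board images
    for section in split_into_sections(utterances):
        print(section, "->", round(evaluate_block(block, section), 2))
    ```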
  • Patent number: 10635859
    Abstract: A natural language recognizing apparatus including an input device, a processing device and a storage device is provided. The input device is configured to provide natural language data. The storage device is configured to store a plurality of program modules. The program modules include a grammar analysis module. The processing device executes the grammar analysis module to analyze the natural language data through a formal grammar model and generate a plurality of string data. When at least one of the string data conforms to a preset grammar condition, the processing device judges that the at least one of the string data is intention data, and the processing device outputs a corresponding response signal according to the intention data. In addition, a natural language recognizing method is also provided.
    Type: Grant
    Filed: January 11, 2018
    Date of Patent: April 28, 2020
    Assignee: VIA Technologies, Inc.
    Inventors: Guo-Feng Zhang, Jing-Jing Guo
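    The abstract above describes parsing natural language against a formal grammar and treating strings that satisfy a preset grammar condition as intention data. A hedged sketch using a regular grammar (regex) as a hypothetical stand-in for the formal grammar model; the grammar rules and responses are invented, not the patented grammar.
    ```python
    import re

    # Hypothetical preset grammar conditions: each rule is a pattern
    # plus the intention it maps to (illustrative only).
    GRAMMAR_RULES = [
        (re.compile(r"^turn (on|off) the (\w+)$"), "device_control"),
        (re.compile(r"^what time is it\??$"), "ask_time"),
    ]

    def analyze(natural_language_data):
        """Return (intention, match) if any string conforms to a rule."""
        for pattern, intention in GRAMMAR_RULES:
            match = pattern.match(natural_language_data.strip().lower())
            if match:
                return intention, match
        return None, None

    def respond(natural_language_data):
        intention, match = analyze(natural_language_data)
        if intention == "device_control":
            return f"Turning {match.group(1)} the {match.group(2)}"
        if intention == "ask_time":
            return "Here is the current time."
        return "Sorry, I did not understand that."

    print(respond("Turn on the fan"))       # -> Turning on the fan
    print(respond("What time is it?"))      # -> Here is the current time.
    ```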
  • Patent number: 10621983
    Abstract: Methods, systems, and related products that provide emotion-sensitive responses to a user's commands and other utterances received at an utterance-based user interface. Acknowledgements of a user's utterances are adapted to the user and/or the user device, and to emotions detected in the user's utterance that have been mapped from one or more emotion features extracted from the utterance. In some examples, extraction of a user's changing emotion during a sequence of interactions is used to generate a response to a user's uttered command. In some examples, emotion processing and command processing of natural utterances are performed asynchronously.
    Type: Grant
    Filed: April 20, 2018
    Date of Patent: April 14, 2020
    Assignee: SPOTIFY AB
    Inventors: Daniel Bromand, David Gustafsson, Richard Mitic, Sarah Mennicken
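    The last sentence of the abstract above notes that emotion processing and command processing can run asynchronously. A minimal sketch with asyncio; the emotion detector, command handler, and acknowledgement wording are placeholders, not the patented implementation.
    ```python
    import asyncio

    async def detect_emotion(utterance: str) -> str:
        await asyncio.sleep(0.1)                 # pretend model latency
        return "frustrated" if "again" in utterance else "neutral"

    async def process_command(utterance: str) -> str:
        await asyncio.sleep(0.05)                # pretend NLU latency
        return "play_track" if "play" in utterance else "unknown"

    async def handle(utterance: str) -> str:
        # Run both pipelines concurrently, then adapt the acknowledgement
        # to the detected emotion.
        emotion, command = await asyncio.gather(
            detect_emotion(utterance), process_command(utterance)
        )
        ack = "Sorry about that. " if emotion == "frustrated" else ""
        return f"{ack}Running {command}"

    print(asyncio.run(handle("play that song again")))
    ```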
  • Patent number: 10622007
    Abstract: Methods, systems, and related products that provide emotion-sensitive responses to a user's commands and other utterances received at an utterance-based user interface. Acknowledgements of a user's utterances are adapted to the user and/or the user device, and to emotions detected in the user's utterance that have been mapped from one or more emotion features extracted from the utterance. In some examples, extraction of a user's changing emotion during a sequence of interactions is used to generate a response to a user's uttered command. In some examples, emotion processing and command processing of natural utterances are performed asynchronously.
    Type: Grant
    Filed: April 20, 2018
    Date of Patent: April 14, 2020
    Assignee: SPOTIFY AB
    Inventors: Daniel Bromand, David Gustafsson, Richard Mitic, Sarah Mennicken
  • Patent number: 10623563
    Abstract: A system and methods are provided for SIP-based voice transcription services. A computer-implemented method includes: transcribing a Session Initiation Protocol (SIP) based conversation between one or more users from voice to a text transcription; identifying each of the one or more users that are speaking using a device SIP_ID of the one or more users; marking the identity of the one or more users that are speaking in the text transcription; and providing the text transcription of the speaking user to non-speaking users.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: April 14, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: John R. Dingler, Sri Ramanathan, Matthew A. Terry, Matthew B. Trevathan
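    A small sketch of the flow in the abstract above: transcript segments are marked with the speaker's SIP ID and delivered to the non-speaking participants. The data structures and SIP URIs are hypothetical stand-ins, not the patented system.
    ```python
    from dataclasses import dataclass

    @dataclass
    class Segment:
        sip_id: str     # device SIP_ID of the speaking user
        text: str       # speech-to-text output for this segment

    def distribute_transcription(segments, participants):
        """Send each marked segment to every participant except the speaker."""
        inboxes = {sip_id: [] for sip_id in participants}
        for seg in segments:
            line = f"[{seg.sip_id}] {seg.text}"      # mark speaker identity
            for sip_id in participants:
                if sip_id != seg.sip_id:             # only non-speaking users
                    inboxes[sip_id].append(line)
        return inboxes

    segments = [
        Segment("sip:alice@example.com", "Can everyone see the agenda?"),
        Segment("sip:bob@example.com", "Yes, it just loaded."),
    ]
    participants = ["sip:alice@example.com", "sip:bob@example.com", "sip:carol@example.com"]
    for user, lines in distribute_transcription(segments, participants).items():
        print(user, lines)
    ```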
  • Patent number: 10606554
    Abstract: A method and apparatus for providing voice command functionality to an interactive whiteboard appliance is provided. An interactive whiteboard appliance comprises: one or more processors; a non-transitory computer-readable medium having instructions embodied thereon, the instructions when executed by the one or more processors cause performance of: detecting, during execution of an annotation window on the interactive whiteboard appliance, a voice input received from a user; storing, in an audio packet, a recording of the voice input; transmitting the audio packet to a speech-to-text service; receiving, from the speech-to-text service, a command string comprising a transcription of the recording of the voice input; using voice mode command processing in a command processor, identifying, from the command string, an executable command that is executable by the interactive whiteboard appliance; causing the application of the interactive whiteboard appliance to execute the executable command.
    Type: Grant
    Filed: August 21, 2019
    Date of Patent: March 31, 2020
    Assignee: RICOH COMPANY, LTD.
    Inventors: Rathnakara Malatesha, Lana Wong, Hiroshi Kitada
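    A hedged sketch of the loop described above: record a voice input, send it to a speech-to-text service, and map the returned command string to an executable command. The service stub and command table are invented placeholders, not Ricoh's API.
    ```python
    def speech_to_text_service(audio_packet: bytes) -> str:
        """Placeholder for the remote speech-to-text call."""
        return "add page"       # pretend transcription of the recording

    # Hypothetical command table for the annotation window.
    COMMANDS = {
        "add page": lambda board: board.append([]),
        "clear page": lambda board: board[-1].clear(),
    }

    def handle_voice_input(audio_packet: bytes, board: list) -> str:
        command_string = speech_to_text_service(audio_packet)
        executable = COMMANDS.get(command_string.strip().lower())
        if executable is None:
            return f"unrecognized command: {command_string!r}"
        executable(board)       # the appliance executes the command
        return f"executed: {command_string}"

    board = [[]]                # one empty page
    print(handle_voice_input(b"\x00\x01", board))   # -> executed: add page
    print(len(board), "pages")                      # -> 2 pages
    ```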
  • Patent number: 10593328
    Abstract: A system configured to enable remote control to allow a first user to provide assistance to a second user. The system may receive a command from the second user granting remote control to the first user, enabling the first user to initiate a voice command on behalf of the second user. In some examples, the system may enable the remote control by enabling wakeword detection for incoming audio data, enabling a second device to detect a wakeword and corresponding voice command from incoming audio data originating from a first device. For example, the second device may disable and/or modify echo cancellation processing, enabling the second device to detect the voice command from audio output based on the incoming audio data and/or from the incoming audio data itself.
    Type: Grant
    Filed: December 27, 2016
    Date of Patent: March 17, 2020
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Peng Wang, Pathivada Rajsekhar Naidu
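    The abstract above describes letting the assisted user's device detect a wakeword in audio arriving from the helper's device, for example by relaxing echo cancellation on that incoming stream. A toy sketch that treats audio as text for readability; the wakeword and device class are assumptions.
    ```python
    WAKEWORD = "alexa"   # hypothetical wakeword

    class SecondDevice:
        """The assisted user's device, receiving audio from the first device."""
        def __init__(self):
            self.remote_control = False    # granted by the second user

        def grant_remote_control(self):
            self.remote_control = True     # also: relax echo cancellation

        def on_incoming_audio(self, incoming_audio: str):
            # Normally far-end audio is cancelled before detection; with remote
            # control granted, run wakeword detection on it as well.
            if not self.remote_control:
                return None
            words = incoming_audio.lower().split()
            if WAKEWORD in words:
                command = " ".join(words[words.index(WAKEWORD) + 1:])
                return command             # command issued on the user's behalf
            return None

    device = SecondDevice()
    device.grant_remote_control()
    print(device.on_incoming_audio("Alexa turn on the kitchen light"))
    ```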
  • Patent number: 10580405
    Abstract: A system configured to enable remote control to allow a first user to provide assistance to a second user. The system may receive a command from the second user granting remote control to the first user, enabling the first user to initiate a voice command on behalf of the second user. In some examples, the system may enable the remote control by treating a voice command originating from the first user as though it originated from the second user instead. For example, the system may receive the voice command from a first device associated with the first user but may route the voice command as though it was received by a second device associated with the second user.
    Type: Grant
    Filed: December 27, 2016
    Date of Patent: March 3, 2020
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Peng Wang, Pathivada Rajsekhar Naidu
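    This related patent instead re-attributes the helper's voice command so the system treats it as though it came from the assisted user's device. A minimal routing sketch; the device IDs and dispatch function are illustrative only.
    ```python
    # Remote-control grants: helper device -> assisted user's device (hypothetical IDs).
    REMOTE_CONTROL_GRANTS = {"device-helper-1": "device-user-2"}

    def dispatch(command: str, device_id: str) -> str:
        """Stand-in for the speech-processing backend."""
        return f"'{command}' executed for {device_id}"

    def route_voice_command(command: str, source_device: str) -> str:
        # If the source device holds a grant, route the command as though it
        # had been received from the assisted user's device instead.
        target = REMOTE_CONTROL_GRANTS.get(source_device, source_device)
        return dispatch(command, target)

    print(route_voice_command("play relaxing music", "device-helper-1"))
    # -> 'play relaxing music' executed for device-user-2
    ```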
  • Patent number: 10573323
    Abstract: An embodiment of a semiconductor package apparatus may include technology to acquire vibration information corresponding to a speaker, and identify the speaker based on the vibration information. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: December 26, 2017
    Date of Patent: February 25, 2020
    Assignee: Intel Corporation
    Inventors: Jonathan Huang, Hector Cordourier Maruri
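    A toy sketch of the idea above: compare features extracted from a vibration signal against enrolled speaker profiles. The features (mean amplitude and zero-crossing rate) and the profiles are invented for illustration and are not the claimed apparatus.
    ```python
    import math

    def features(vibration):
        """Toy features: mean absolute amplitude and zero-crossing rate."""
        mean_amp = sum(abs(x) for x in vibration) / len(vibration)
        crossings = sum(1 for a, b in zip(vibration, vibration[1:]) if a * b < 0)
        return [mean_amp, crossings / len(vibration)]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b) + 1e-9)

    ENROLLED = {                    # hypothetical per-speaker profiles
        "alice": [0.30, 0.20],
        "bob":   [0.80, 0.05],
    }

    def identify_speaker(vibration):
        f = features(vibration)
        return max(ENROLLED, key=lambda name: cosine(f, ENROLLED[name]))

    sample = [0.3, -0.2, 0.4, -0.3, 0.2, -0.25, 0.35, -0.3]
    print(identify_speaker(sample))   # "alice" for this toy sample
    ```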
  • Patent number: 10573321
    Abstract: Systems and methods for optimizing voice detection via a network microphone device (NMD) based on a selected voice-assistant service (VAS) are disclosed herein. In one example, the NMD detects sound via individual microphones and selects a first VAS to communicate with the NMD. The NMD produces a first sound-data stream based on the detected sound using a spatial processor in a first configuration. Once the NMD determines that a second VAS is to be selected over the first VAS, the spatial processor assumes a second configuration for producing a second sound-data stream based on the detected sound. The second sound-data stream is then transmitted to one or more remote computing devices associated with the second VAS.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: February 25, 2020
    Assignee: Sonos, Inc.
    Inventors: Connor Kristopher Smith, Kurt Thomas Soto, Charles Conor Sleith
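    A hedged sketch of the switch described above: when the selected voice-assistant service changes, the spatial processor takes the configuration associated with that VAS before the sound-data stream is produced and sent. The configurations and endpoints are invented placeholders.
    ```python
    # Hypothetical spatial-processor configurations per voice-assistant service.
    VAS_CONFIG = {
        "vas_a": {"beam_width": "narrow", "endpoint": "https://vas-a.example/speech"},
        "vas_b": {"beam_width": "wide",   "endpoint": "https://vas-b.example/speech"},
    }

    class NetworkMicrophoneDevice:
        def __init__(self, selected_vas="vas_a"):
            self.selected_vas = selected_vas

        def select_vas(self, vas: str):
            # Switching the VAS also switches the spatial-processor configuration.
            self.selected_vas = vas

        def produce_sound_data_stream(self, mic_frames):
            config = VAS_CONFIG[self.selected_vas]
            # Stand-in for spatial processing: tag frames with the active config.
            stream = [f"{config['beam_width']}:{frame}" for frame in mic_frames]
            return stream, config["endpoint"]

    nmd = NetworkMicrophoneDevice()
    print(nmd.produce_sound_data_stream(["frame1", "frame2"]))
    nmd.select_vas("vas_b")
    print(nmd.produce_sound_data_stream(["frame3"]))
    ```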
  • Patent number: 10566010
    Abstract: Methods, systems, and related products that provide emotion-sensitive responses to a user's commands and other utterances received at an utterance-based user interface. Acknowledgements of a user's utterances are adapted to the user and/or the user device, and to emotions detected in the user's utterance that have been mapped from one or more emotion features extracted from the utterance. In some examples, extraction of a user's changing emotion during a sequence of interactions is used to generate a response to a user's uttered command. In some examples, emotion processing and command processing of natural utterances are performed asynchronously.
    Type: Grant
    Filed: April 20, 2018
    Date of Patent: February 18, 2020
    Assignee: SPOTIFY AB
    Inventors: Daniel Bromand, David Gustafsson, Richard Mitic, Sarah Mennicken
  • Patent number: 10558421
    Abstract: A computer-implemented method includes identifying a first set of utterances from a plurality of utterances. The plurality of utterances is associated with a conversation and transmitted via a plurality of audio signals. The computer-implemented method further includes mining the first set of utterances for a first context. The computer-implemented method further includes determining that the first context associated with the first set of utterances is not relevant to a second context associated with the conversation. The computer-implemented method further includes dynamically muting, for at least a first period of time, a first audio signal in the plurality of audio signals corresponding to the first set of utterances. A corresponding computer system and computer program product are also disclosed.
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: February 11, 2020
    Assignee: International Business Machines Corporation
    Inventors: Tamer E. Abuelsaad, Gregory J. Boss, John E. Moore, Jr., Randy A. Rendahl
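    A toy sketch of the decision in the abstract above: mine a set of utterances for its context, compare it with the conversation's context, and mute the corresponding audio signal for a period if the two are unrelated. Keyword sets stand in for the context-mining step; the overlap threshold and mute duration are assumptions.
    ```python
    import time

    def mine_context(utterances):
        """Toy context: the set of content words used in the utterances."""
        words = " ".join(utterances).lower().split()
        return {w for w in words if len(w) > 3}

    def is_relevant(first_context, conversation_context, min_overlap=2):
        return len(first_context & conversation_context) >= min_overlap

    def maybe_mute(first_utterances, conversation_utterances, mute_seconds=5):
        first_ctx = mine_context(first_utterances)
        conv_ctx = mine_context(conversation_utterances)
        if is_relevant(first_ctx, conv_ctx):
            return None
        return time.time() + mute_seconds    # mute this audio signal until then

    conversation = ["the budget review covers travel and equipment spending"]
    side_talk = ["did you watch the game last night"]
    print(maybe_mute(side_talk, conversation))      # a timestamp: muted
    print(maybe_mute(conversation, conversation))   # None: relevant, not muted
    ```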
  • Patent number: 10552118
    Abstract: A computer-implemented method includes identifying a first set of utterances from a plurality of utterances. The plurality of utterances is associated with a conversation and transmitted via a plurality of audio signals. The computer-implemented method further includes mining the first set of utterances for a first context. The computer-implemented method further includes determining that the first context associated with the first set of utterances is not relevant to a second context associated with the conversation. The computer-implemented method further includes dynamically muting, for at least a first period of time, a first audio signal in the plurality of audio signals corresponding to the first set of utterances. A corresponding computer system and computer program product are also disclosed.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: February 4, 2020
    Assignee: International Business Machines Corporation
    Inventors: Tamer E. Abuelsaad, Gregory J. Boss, John E. Moore, Jr., Randy A. Rendahl
  • Patent number: 10540968
    Abstract: Provided is an information processing device including a processing unit acquisition portion that acquires one or more processing units, on the basis of noise, from a first recognition string obtained by performing speech recognition on first input speech, and a processor that processes a processing target, when any one of the one or more processing units is selected as the processing target.
    Type: Grant
    Filed: August 24, 2015
    Date of Patent: January 21, 2020
    Assignee: SONY CORPORATION
    Inventors: Shinichi Kawano, Yuhei Taki
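    A hedged sketch of the idea above: the recognition string is divided into more (finer) processing units when the input speech was noisy, so a smaller piece can be selected and processed. The noise threshold and unit sizes are assumptions, not values from the patent.
    ```python
    def acquire_processing_units(recognition_string, noise_level):
        """Split the recognition result into processing units; noisier input
        yields smaller units that are easier to select and re-process."""
        words = recognition_string.split()
        unit_size = 1 if noise_level > 0.5 else 3     # assumed threshold
        return [" ".join(words[i:i + unit_size]) for i in range(0, len(words), unit_size)]

    def process_target(units, selected_index):
        """Process (here: flag for re-recognition) the selected unit."""
        return f"re-recognize: {units[selected_index]!r}"

    quiet = acquire_processing_units("send the report to the whole team", noise_level=0.2)
    noisy = acquire_processing_units("send the report to the whole team", noise_level=0.8)
    print(quiet)                      # ['send the report', 'to the whole', 'team']
    print(noisy)                      # one word per unit
    print(process_target(noisy, 2))   # -> re-recognize: 'report'
    ```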
  • Patent number: 10535360
    Abstract: A phone stand includes a phone holder for coupling to a phone conducting a voice session, a plurality of directional speakers positioned to project sound to a focused audio area corresponding to a location where a user is expected to be positioned, other speaker(s), and a system controller. The system controller is configured to receive audio signals of the voice session from the phone, separate the audio signals into speech signals and non-speech signals, obtain output mixing attributes, generate mixed signals by combining the speech signals and the non-speech signals according to the output mixing attributes, and send the mixed signals to the plurality of directional speakers. The other speaker(s) can include non-directional speakers, and the system controller is further configured to send the speech signals in the mixed signals to the plurality of directional speakers and the non-speech signals in the mixed signals to the other speaker(s).
    Type: Grant
    Filed: May 25, 2017
    Date of Patent: January 14, 2020
    Assignee: TP Lab, Inc.
    Inventors: Chi Fai Ho, John Chiong
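    A minimal sketch of the routing above: incoming call audio is separated into speech and non-speech, mixed according to output attributes, and the speech portion is sent to the directional speakers while non-speech goes to the other speakers. The separation step and frame tags are placeholders.
    ```python
    def separate(audio_frames):
        """Placeholder separation: frames tagged 'speech' vs everything else."""
        speech = [f for f in audio_frames if f.startswith("speech")]
        non_speech = [f for f in audio_frames if not f.startswith("speech")]
        return speech, non_speech

    def mix_and_route(audio_frames, mixing_attributes):
        speech, non_speech = separate(audio_frames)
        speech_gain = mixing_attributes.get("speech_gain", 1.0)
        other_gain = mixing_attributes.get("non_speech_gain", 0.5)
        return {
            "directional_speakers": [(f, speech_gain) for f in speech],
            "other_speakers": [(f, other_gain) for f in non_speech],
        }

    frames = ["speech:hello", "music:intro", "speech:how are you"]
    print(mix_and_route(frames, {"speech_gain": 1.0, "non_speech_gain": 0.3}))
    ```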
  • Patent number: 10529324
    Abstract: One embodiment provides a method, including: obtaining, using a processor, voice data; obtaining, using a processor, geographic location data; identifying, based on the geographic location data, a language model; and generating, using the language model, a textual representation of the voice data. Other aspects are described and claimed.
    Type: Grant
    Filed: December 27, 2016
    Date of Patent: January 7, 2020
    Assignee: COGNISTIC, LLC
    Inventors: Sanjay Chopra, Florian Metze
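    A hedged sketch of the method above: geographic location data selects a language model, which is then used to generate the textual representation of the voice data. The location-to-model table and the transcribe stub are invented for illustration.
    ```python
    # Hypothetical mapping from coarse geographic region to language model.
    REGION_MODELS = {
        "US": "english_us_model",
        "IN": "english_in_model",
        "DE": "german_de_model",
    }

    def identify_language_model(geo_location: dict) -> str:
        return REGION_MODELS.get(geo_location.get("country"), "english_us_model")

    def transcribe(voice_data: bytes, model_name: str) -> str:
        """Stand-in for decoding with the selected model."""
        return f"<transcript of {len(voice_data)} bytes via {model_name}>"

    voice_data = b"\x01\x02\x03"
    model = identify_language_model({"country": "DE", "lat": 52.5, "lon": 13.4})
    print(transcribe(voice_data, model))   # -> ... via german_de_model
    ```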
  • Patent number: 10522149
    Abstract: An information processor requests a recognition result manager to transmit recording information about a call including a keyword and a recognition result of speech recognition using an extension number as a key. The manager transmits the recording information about the call including the keyword corresponding to the extension number and the recognition result of the speech recognition to the processor. The processor displays a recognition result of speech recognition of the call including the keyword on a display unit. Upon receiving an input of an instruction to perform speech playback, the processor transmits recording information in association with text displayed on the display unit to a recorder. The recorder transmits speech data corresponding to the recording information to the processor. The processor plays back speech data corresponding to the recording information.
    Type: Grant
    Filed: January 10, 2018
    Date of Patent: December 31, 2019
    Assignee: Hitachi Information & Telecommunication Engineering, Ltd.
    Inventors: Yo Naka, Takashi Sugiyama
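    A small sketch of the request/response flow above: the processor asks a recognition-result manager for calls on an extension that contain a keyword, displays the recognized text, and then asks the recorder for the matching speech data to play back. The classes and records here are illustrative stand-ins.
    ```python
    class RecognitionResultManager:
        def __init__(self, records):
            self.records = records       # {extension: [(recording_id, text), ...]}

        def find(self, extension, keyword):
            return [(rid, text) for rid, text in self.records.get(extension, [])
                    if keyword in text]

    class Recorder:
        def __init__(self, audio):
            self.audio = audio           # {recording_id: speech data}

        def fetch(self, recording_id):
            return self.audio[recording_id]

    manager = RecognitionResultManager({"2001": [("rec-1", "please cancel my contract"),
                                                 ("rec-2", "calling about billing")]})
    recorder = Recorder({"rec-1": b"<pcm-1>", "rec-2": b"<pcm-2>"})

    # Processor: look up by extension and keyword, display text, then play back.
    for recording_id, text in manager.find("2001", "cancel"):
        print("display:", text)
        print("playback:", recorder.fetch(recording_id))
    ```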
  • Patent number: 10504541
    Abstract: There are disclosed devices, systems and methods for desired signal spotting in noisy, flawed environments by identifying a signal to be spotted, identifying a target confidence level, and then passing a pool of cabined arrays through a comparator to detect the identified signal, wherein the cabined arrays are derived from respective distinct environments. The arrays may include plural converted samples, each converted sample including a product of a conversion of a respective original sample, the conversion including filtering noise and transforming the original sample from a first form to a second form. Detecting may include measuring a confidence of the presence of the identified signal in each of plural converted samples using correlation of the identified signal to bodies of known matching samples. If the confidence for a given converted sample satisfies the target confidence level, the given sample is flagged.
    Type: Grant
    Filed: June 18, 2019
    Date of Patent: December 10, 2019
    Assignee: Invoca, Inc.
    Inventors: Sean Michael Storlie, Victor Jara Borda, Michael Kingsley McCourt, Jr., Leland W. Kirchhoff, Colin Denison Kelley, Nicholas James Burwell
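    A toy sketch of the comparator pass described above: each converted sample in a pool of cabined arrays receives a confidence score for the identified signal, and samples meeting the target confidence level are flagged. The scoring function is a trivial token match standing in for real correlation, and the sample data is invented.
    ```python
    def confidence(identified_signal: str, converted_sample: str) -> float:
        """Toy correlation: fraction of the signal's tokens present in the sample."""
        tokens = identified_signal.lower().split()
        hits = sum(1 for t in tokens if t in converted_sample.lower())
        return hits / len(tokens)

    def spot(identified_signal, cabined_arrays, target_confidence=0.8):
        flagged = []
        for environment, samples in cabined_arrays.items():
            for sample in samples:
                score = confidence(identified_signal, sample)
                if score >= target_confidence:
                    flagged.append((environment, sample, round(score, 2)))
        return flagged

    # Arrays of converted (noise-filtered, transcribed) samples per environment.
    cabined_arrays = {
        "call_center": ["thanks for calling please schedule a demo", "wrong number"],
        "voicemail":   ["could we schedule a demo next week"],
    }
    print(spot("schedule a demo", cabined_arrays))
    ```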
  • Patent number: 10491679
    Abstract: A method of using voice commands from a mobile device to remotely access and control a computer. The method includes receiving audio data from the mobile device at the computer. The audio data is decoded into a command. The software program for which the command was provided is determined. At least one process is executed at the computer in response to the command. Output data is generated at the computer in response to executing the at least one process. The output data is transmitted to the mobile device.
    Type: Grant
    Filed: September 14, 2017
    Date of Patent: November 26, 2019
    Assignee: Voice Tech Corporation
    Inventor: Todd R. Smith
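    A hedged sketch of the round trip above: the computer decodes audio from the mobile device into a command, determines which software program it targets, executes a process, and returns the output data. The decoder stub and program registry are invented placeholders.
    ```python
    def decode_audio(audio_data: bytes) -> str:
        """Placeholder speech decoder."""
        return "spreadsheet: open quarterly report"

    # Hypothetical registry mapping a command prefix to a software program handler.
    PROGRAMS = {
        "spreadsheet": lambda action: f"spreadsheet opened '{action}'",
        "mail":        lambda action: f"mail client did '{action}'",
    }

    def handle_remote_command(audio_data: bytes) -> str:
        command = decode_audio(audio_data)
        program_name, _, action = command.partition(":")
        program = PROGRAMS.get(program_name.strip())
        if program is None:
            return "no program registered for this command"
        output_data = program(action.strip())    # execute process at the computer
        return output_data                       # transmitted back to the mobile device

    print(handle_remote_command(b"\x00\x01\x02"))
    # -> spreadsheet opened 'open quarterly report'
    ```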
  • Patent number: 10475446
    Abstract: A virtual assistant uses context information to supplement natural language or gestural input from a user. Context helps to clarify the user's intent and to reduce the number of candidate interpretations of the user's input, and reduces the need for the user to provide excessive clarification input. Context can include any available information that is usable by the assistant to supplement explicit user input to constrain an information-processing problem and/or to personalize results. Context can be used to constrain solutions during various phases of processing, including, for example, speech recognition, natural language processing, task flow processing, and dialog generation.
    Type: Grant
    Filed: June 12, 2014
    Date of Patent: November 12, 2019
    Assignee: Apple Inc.
    Inventors: Thomas R. Gruber, Christopher D. Brigham, Daniel S. Keen, Gregory Novick, Benjamin S. Phipps
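    A small sketch of the general idea above: context (here, the entity most recently in focus and a list of recent contacts) resolves a pronoun and narrows the candidate interpretations of a request. The context store and ranking rule are illustrative assumptions, not Apple's implementation.
    ```python
    def resolve_with_context(user_input: str, context: dict) -> str:
        """Replace an ambiguous pronoun with the entity most recently in focus."""
        focus = context.get("last_mentioned_entity")
        if focus:
            return user_input.replace(" him", f" {focus}").replace(" her", f" {focus}")
        return user_input

    def rank_candidates(candidates: list, context: dict) -> list:
        """Prefer candidate interpretations that appear in the current context."""
        recent = set(context.get("recent_contacts", []))
        return sorted(candidates, key=lambda c: c not in recent)

    context = {"last_mentioned_entity": "Herb", "recent_contacts": ["Herb Carter"]}
    print(resolve_with_context("call him back", context))            # -> call Herb back
    print(rank_candidates(["Herb Smith", "Herb Carter"], context))   # Herb Carter first
    ```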