Patents Examined by Brian L. Albertalli
  • Patent number: 10572585
    Abstract: This disclosure provides a computer-implemented method. The method may include extracting one or more features based on a first utterance from a first interlocutor in a dialog and a second utterance from a second interlocutor in the dialog. The method may further include inferring one or more personality traits of the first interlocutor based on the one or more extracted features from the dialog.
    Type: Grant
    Filed: November 30, 2017
    Date of Patent: February 25, 2020
    Assignee: International Business Machines Corporation
    Inventors: En Liang Xu, Chang Hua Sun, Shi Wan Zhao, Ke Ke Cai, Yue Chen, Li Zhang, Zhong Su
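The abstract above describes extracting features from two interlocutors' utterances and inferring personality traits from them. Below is a minimal sketch of that general idea, assuming hypothetical lexical features, trait names, and hand-set weights; none of the names or values come from the patent.

```python
# Minimal sketch (not the patented method): infer illustrative personality
# scores from simple lexical features of a two-party dialog turn pair.
# Feature names, trait labels, and weights are placeholders.

def extract_features(first_utterance: str, second_utterance: str) -> dict:
    """Derive a few toy features from both interlocutors' utterances."""
    first_tokens = first_utterance.lower().split()
    second_tokens = second_utterance.lower().split()
    return {
        "first_len": len(first_tokens),
        "first_exclaims": first_utterance.count("!"),
        "first_questions": first_utterance.count("?"),
        "overlap": len(set(first_tokens) & set(second_tokens)),
    }

def infer_traits(features: dict) -> dict:
    """Map features to trait scores with hand-set illustrative weights."""
    return {
        "extraversion": 0.1 * features["first_exclaims"] + 0.01 * features["first_len"],
        "agreeableness": 0.05 * features["overlap"],
        "openness": 0.1 * features["first_questions"],
    }

if __name__ == "__main__":
    feats = extract_features("That sounds great, let's do it!", "Sure, sounds good to me.")
    print(infer_traits(feats))
```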
  • Patent number: 10572606
    Abstract: Artificial intelligence (AI) technology can be used in combination with composable communication goal statements and an ontology to facilitate a user's ability to quickly structure story outlines in a manner usable by an NLG narrative generation system without any need for the user to directly author computer code. This AI technology permits NLG systems to determine the appropriate content for inclusion in a narrative story about a data set in a manner that will satisfy a desired communication goal. The determined content can be arranged in a computed story outline that is created at runtime, and NLG can be performed on the computed story outline to generate the narrative story.
    Type: Grant
    Filed: February 15, 2018
    Date of Patent: February 25, 2020
    Assignee: NARRATIVE SCIENCE INC.
    Inventors: Andrew R. Paley, Nathan Drew Nichols, Matthew Lloyd Trahan, Maia Jane Lewis Meza, Lawrence A. Birnbaum, Kristian J. Hammond
  • Patent number: 10573298
    Abstract: Techniques are described herein for enabling an automated assistant to adjust its behavior depending on a detected age range and/or “vocabulary level” of a user who is engaging with the automated assistant. In various implementations, data indicative of a user's utterance may be used to estimate one or more of the user's age range and/or vocabulary level. The estimated age range/vocabulary level may be used to influence various aspects of a data processing pipeline employed by an automated assistant. In various implementations, aspects of the data processing pipeline that may be influenced by the user's age range/vocabulary level may include one or more of automated assistant invocation, speech-to-text (“STT”) processing, intent matching, intent resolution (or fulfillment), natural language generation, and/or text-to-speech (“TTS”) processing. In some implementations, one or more tolerance thresholds associated with one or more of these aspects, such as grammatical tolerances, vocabularic tolerances, etc., may be adjusted based on the user's estimated age range and/or vocabulary level.
    Type: Grant
    Filed: April 16, 2018
    Date of Patent: February 25, 2020
    Assignee: GOOGLE LLC
    Inventors: Pedro Gonnet Anders, Victor Carbune, Daniel Keysers, Thomas Deselaers, Sandro Feuz
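The abstract describes adjusting tolerance thresholds in the assistant's processing pipeline based on an estimated age range or vocabulary level. Below is a minimal sketch of that idea, assuming invented age buckets, threshold names, and values; it is not Google's implementation.

```python
# Minimal sketch: loosen grammatical/vocabulary tolerances and the STT
# confidence floor when the estimated age range suggests a young speaker.
# Age buckets and threshold values are made up for illustration.

from dataclasses import dataclass

@dataclass
class PipelineTolerances:
    grammar_tolerance: float      # 0 = strict parsing, 1 = very forgiving
    vocabulary_tolerance: float
    stt_confidence_floor: float

def tolerances_for(estimated_age_range: str) -> PipelineTolerances:
    if estimated_age_range == "child":
        return PipelineTolerances(grammar_tolerance=0.9, vocabulary_tolerance=0.9,
                                  stt_confidence_floor=0.3)
    if estimated_age_range == "teen":
        return PipelineTolerances(grammar_tolerance=0.6, vocabulary_tolerance=0.6,
                                  stt_confidence_floor=0.4)
    return PipelineTolerances(grammar_tolerance=0.3, vocabulary_tolerance=0.3,
                              stt_confidence_floor=0.5)

print(tolerances_for("child"))
```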
  • Patent number: 10566011
    Abstract: An auto voice trigger method and an audio analyzer employing the same are provided. The auto voice trigger method includes: receiving a signal by at least one resonator microphone included in an array of a plurality of resonator microphones with different frequency bandwidths; analyzing the received signal and determining whether the received signal is a voice signal; and when it is determined that the received signal is the voice signal, waking up a whole system to receive and analyze a wideband signal.
    Type: Grant
    Filed: November 8, 2017
    Date of Patent: February 18, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sangha Park, Sungchan Kang, Cheheung Kim, Yongseop Yoon, Choongho Rhee
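The abstract describes waking the full system only after the signal from a resonator microphone is judged to be voice. A rough sketch of such a gate follows, with an illustrative spectral-energy heuristic standing in for the patented analysis; the band limits and threshold are assumptions.

```python
# Minimal sketch: wake the wideband pipeline only when most of the energy of a
# narrow-band channel sits in a speech-typical band. Not Samsung's method.

import numpy as np

def looks_like_voice(band_signal: np.ndarray, sample_rate: int,
                     band_hz=(100.0, 400.0), band_fraction=0.5) -> bool:
    """Crude gate: does most of the energy sit in a speech-typical band?"""
    spectrum = np.abs(np.fft.rfft(band_signal)) ** 2
    freqs = np.fft.rfftfreq(len(band_signal), d=1.0 / sample_rate)
    in_band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    return spectrum[in_band].sum() / max(spectrum.sum(), 1e-12) > band_fraction

def wake_whole_system():
    print("waking wideband capture and the full analysis pipeline")

sr = 16000
t = np.arange(sr) / sr
narrow_band = 0.1 * np.sin(2 * np.pi * 220 * t)   # stand-in for one resonator channel
if looks_like_voice(narrow_band, sr):
    wake_whole_system()
```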
  • Patent number: 10564928
    Abstract: Systems and methods are provided herein for responding to a voice command at a volume level based on a volume level of the voice command. For example, a media guidance application may detect, through a first voice-operated user device of a plurality of voice-operated user devices, a voice command spoken by a user. The media guidance application may determine a first volume level of the voice command. Based on the volume level of the voice command, the media guidance application may determine that a second voice-operated user device of the plurality of voice-operated user devices is closer to the user than any of the other voice-operated user devices. The media guidance application may generate an audible response, through the second voice-operated user device, at a second volume level that is set based on the first volume level of the voice command.
    Type: Grant
    Filed: June 2, 2017
    Date of Patent: February 18, 2020
    Assignee: Rovi Guides, Inc.
    Inventors: Michael McCarty, Glen E. Roe
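The abstract describes answering through the voice-operated device closest to the user, at a volume derived from the volume of the command. A minimal sketch of that selection-and-scaling logic follows, with invented device names and dB values; it is not Rovi's implementation.

```python
# Minimal sketch: pick the device that heard the command loudest (a proxy for
# "closest") and answer there at a volume matched to the command volume.

def pick_responder(command_levels_db: dict) -> str:
    """command_levels_db maps device id -> measured command level in dB."""
    return max(command_levels_db, key=command_levels_db.get)

def response_volume(command_level_db: float, floor_db=40.0, ceiling_db=75.0) -> float:
    """Respond roughly as loud as the user spoke, clamped to a sane range."""
    return min(max(command_level_db, floor_db), ceiling_db)

levels = {"kitchen_speaker": 52.0, "living_room_tv": 61.5, "bedroom_display": 38.0}
device = pick_responder(levels)
print(device, response_volume(levels[device]))
```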
  • Patent number: 10558689
    Abstract: A method, computer system, and a computer program product for leveraging coherent question sequences are provided. The present invention may include receiving an initiating question. The present invention may include receiving a subsequent question. The present invention may include determining that the received subsequent question is not a rephrasing of the received initiating question. The present invention may also include determining that the received subsequent question is not beginning a new question topic based on determining that the received subsequent question is not a rephrasing of the received initiating question. The present invention may then include propagating a conversational context based on determining that the received subsequent question is not beginning a new question topic. The present invention may include generating and scoring an answer based on the propagated conversational context. The present invention may lastly include outputting the answer.
    Type: Grant
    Filed: November 15, 2017
    Date of Patent: February 11, 2020
    Assignee: International Business Machines Corporation
    Inventors: Charles E. Beller, Paul J. Chase, Jr., Michael Drzewucki, Edward G. Katz, Christopher Phipps
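The abstract walks through deciding whether a follow-up question is a rephrasing, a new topic, or a continuation, and propagating conversational context only in the last case. The sketch below mirrors that decision flow with toy overlap and anaphora heuristics; the thresholds are assumptions, not IBM's method.

```python
# Minimal sketch: classify a follow-up question and propagate context only
# when it continues the current topic.

def token_overlap(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def handle_follow_up(initiating_q: str, subsequent_q: str, context: dict) -> dict:
    similarity = token_overlap(initiating_q, subsequent_q)
    if similarity > 0.8:
        return context                       # treated as a rephrasing; keep context as-is
    has_anaphora = any(w in subsequent_q.lower().split() for w in ("it", "that", "they", "there"))
    if not has_anaphora and similarity < 0.1:
        return {}                            # new topic; drop the old context
    context = dict(context)
    context["previous_question"] = initiating_q   # continuation; propagate context
    return context

ctx = handle_follow_up("Who founded the company?", "When did they found it?", {"entity": "the company"})
print(ctx)
```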
  • Patent number: 10540970
    Abstract: This disclosure describes, in part, techniques for implementing voice-enabled devices in vehicle environments to facilitate voice interaction with vehicle computing devices. Due to the differing communication capabilities of existing vehicle computing devices, the techniques described herein provide different communication topologies for facilitating voice interaction with the vehicle computing devices. In some examples, the voice-enabled device may be communicatively coupled to a user device, which may communicate with a remote speech-processing system to determine and perform operations responsive to voice commands, such as conducting phone calls using loudspeakers of the vehicle computing device, streaming music to the vehicle computing device, and so forth.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: January 21, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Dibyendu Nandy, Rangaprabhu Parthasarathy, Snehal G. Joshi, Arvind Mandhani, Dhananjay Motwani, Hans Edward Birch-Jensen, Ambika Pajjuri
  • Patent number: 10540969
    Abstract: A purpose of the present invention is to provide a technique for easily performing accurate voice recognition.
    Type: Grant
    Filed: July 21, 2016
    Date of Patent: January 21, 2020
    Assignee: Clarion Co., Ltd.
    Inventors: Takashi Yamaguchi, Yasushi Nagai
  • Patent number: 10535344
    Abstract: Examples of the present disclosure describe systems and methods relating to conversational system user experience. In an example, a conversational system may use one or more sensors of a user device to affect the topic or direction of a conversation or to identify a new conversation topic. The conversational system may also receive input from a user, wherein a GUI may enable the user to specify or alter semantic information used during the conversation. The GUI may comprise one or more skeuomorphic elements designed to provide a familiar or intuitive way for the user to interact with the conversational system. The GUI may also be used to disambiguate messages or convey emotion or sentiment to the user. In another example, haptic or audio feedback may be provided alongside a message to convey emotion to the user during the conversation.
    Type: Grant
    Filed: June 8, 2017
    Date of Patent: January 14, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Joseph Edwin Johnson, Jr., Emmanouil Koukoumidis, Donald Brinkman, Hailong Mu, Dustin Abramson, Hudong Wang, Dan Vann, Youssef Hammad
  • Patent number: 10528666
    Abstract: Methods and apparatuses for determining a domain of a sentence are disclosed. The apparatus may generate, using an autoencoder, an embedded feature from an input feature indicating an input sentence, and determine a domain of the input sentence based on a location of the embedded feature in an embedding space where embedded features are distributed.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: January 7, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Yunhong Min
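The abstract describes embedding an input sentence and choosing its domain based on where the embedding lands in the embedding space. A minimal nearest-centroid sketch of that idea follows, with a toy vocabulary, random stand-in encoder weights, and invented domain centroids; it is not Samsung's trained model.

```python
# Minimal sketch: embed a sentence with a stand-in encoder and assign the
# domain whose centroid is nearest in the embedding space.

import numpy as np

VOCAB = ["play", "music", "weather", "tomorrow", "call", "mom"]
DOMAIN_CENTROIDS = {                     # would normally be learned from embedded training sentences
    "media":   np.array([1.0, 0.0]),
    "weather": np.array([0.0, 1.0]),
    "phone":   np.array([-1.0, 0.0]),
}
ENCODER_WEIGHTS = np.random.default_rng(0).normal(size=(len(VOCAB), 2))  # stand-in for a trained encoder

def embed(sentence: str) -> np.ndarray:
    """Produce the 'embedded feature' of the input sentence."""
    bow = np.array([sentence.lower().split().count(w) for w in VOCAB], dtype=float)
    return bow @ ENCODER_WEIGHTS

def classify_domain(sentence: str) -> str:
    z = embed(sentence)
    return min(DOMAIN_CENTROIDS, key=lambda d: np.linalg.norm(z - DOMAIN_CENTROIDS[d]))

print(classify_domain("play some music"))
```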
  • Patent number: 10510356
    Abstract: A voice processing method and device, the method comprising: detecting a current voice application scenario in a network (S1); determining the voice quality requirement and the network requirement of the current voice application scenario (S2); based on the voice quality requirement and the network requirement, configuring voice processing parameters corresponding to the voice application scenario (S3); and according to the voice processing parameters, conducting voice processing on the voice signals collected in the voice application scenario (S4).
    Type: Grant
    Filed: April 20, 2018
    Date of Patent: December 17, 2019
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventor: Hong Liu
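The abstract describes choosing voice processing parameters from the quality and network requirements of the detected application scenario. A minimal sketch of that mapping follows; the scenario names, requirements, and parameter values are invented for illustration, not Tencent's implementation.

```python
# Minimal sketch: scenario -> requirements -> processing parameters.

SCENARIO_REQUIREMENTS = {
    "voip_call":     {"quality": "medium", "network": "low_latency"},
    "voice_message": {"quality": "high",   "network": "tolerant"},
    "game_chat":     {"quality": "low",    "network": "low_latency"},
}

def processing_parameters(scenario: str) -> dict:
    req = SCENARIO_REQUIREMENTS[scenario]
    return {
        "sample_rate_hz": 48000 if req["quality"] == "high" else 16000,
        "bitrate_kbps":   64 if req["quality"] == "high" else 24,
        "frame_ms":       10 if req["network"] == "low_latency" else 40,
        "noise_suppression": req["quality"] != "low",
    }

print(processing_parameters("voip_call"))
```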
  • Patent number: 10504518
    Abstract: Systems and processes for accelerating task performance are provided. An example method includes, at an electronic device including a display and one or more input devices, displaying, on the display, a user interface including a suggestion affordance associated with a task, detecting, via the one or more input devices, a first user input corresponding to a selection of the suggestion affordance, in response to detecting the first user input: in accordance with a determination that the task is a task of a first type, performing the task, and in accordance with a determination that the task is a task of a second type different than the first type, displaying a confirmation interface including a confirmation affordance.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: December 10, 2019
    Assignee: Apple Inc.
    Inventors: Cyrus D. Irani, Oluwatomiwa Alabi, Kellie Albert, Ieyuki Kawashima, Stephen O. Lemay
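The abstract describes performing a selected suggested task directly if it is of a first type and displaying a confirmation interface if it is of a second type. The sketch below illustrates that dispatch with hypothetical task names and handlers; it is not Apple's implementation.

```python
# Minimal sketch: on selection of a suggestion affordance, either run the task
# or show a confirmation step depending on the task type.

BACKGROUND_SAFE = {"reorder_coffee", "start_timer"}      # "first type": perform immediately
NEEDS_CONFIRMATION = {"send_payment", "send_message"}    # "second type": confirm first

def on_suggestion_selected(task_name: str, perform, show_confirmation):
    if task_name in BACKGROUND_SAFE:
        perform(task_name)
    elif task_name in NEEDS_CONFIRMATION:
        show_confirmation(task_name)
    else:
        raise ValueError(f"unknown task: {task_name}")

on_suggestion_selected(
    "send_payment",
    perform=lambda t: print(f"performing {t}"),
    show_confirmation=lambda t: print(f"showing confirmation affordance for {t}"),
)
```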
  • Patent number: 10503833
    Abstract: A device for relation extraction in a natural language sentence having n words is suggested, the device comprising: a recurrent neural network for joint entity and relation extractions of entities and relations between the entities in the sentence, and an entity-relation table for storing entity labels for the entities and relation labels for the relations, wherein both the entity labels and the relation labels are defined as instances of binary relationships between certain words w_i and w_j in the sentence, with i ∈ [1, . . . , n] and j ∈ [1, . . . , n], wherein each of the entity labels is a first binary relationship for i = j, and wherein each of the relation labels is a second binary relationship for i ≠ j, wherein the recurrent neural network is configured to fill the cells of the entity-relation table and includes a forward neural network and a backward neural network.
    Type: Grant
    Filed: April 27, 2017
    Date of Patent: December 10, 2019
    Assignee: SIEMENS AKTIENGESELLSCHAFT
    Inventors: Bernt Andrassy, Pankaj Gupta
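The abstract defines an n x n entity-relation table whose diagonal cells (i = j) hold entity labels and whose off-diagonal cells (i ≠ j) hold relation labels, filled by a forward/backward recurrent network. The sketch below shows only the table structure, with toy rules standing in for the trained bidirectional RNN; the sentence and labels are illustrative.

```python
# Minimal sketch of the entity-relation table layout (not Siemens' model).

sentence = ["Anna", "works", "for", "Siemens"]
n = len(sentence)

def predict_entity(word: str) -> str:
    """Toy rule standing in for the RNN's diagonal (entity) prediction."""
    return "PER" if word[0].isupper() and word != sentence[-1] else ("ORG" if word[0].isupper() else "O")

def predict_relation(i: int, j: int) -> str:
    """Toy rule standing in for the RNN's off-diagonal (relation) prediction."""
    return "works_for" if sentence[i] == "Anna" and sentence[j] == "Siemens" else "none"

table = [[None] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        table[i][j] = predict_entity(sentence[i]) if i == j else predict_relation(i, j)

for row in table:
    print(row)
```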
  • Patent number: 10489111
    Abstract: The present specification relates to a smart controlling device capable of utilizing machine learning for voice recognition and a method of controlling therefor. The smart controlling device according to the present invention includes a receiver configured to receive an input including a command trigger, and a controller configured to detect one or more external display devices, select a display device of the detected one or more external display devices, cause a power status of the selected display device to be changed to a first state, and cause a response data corresponding to a first command data received after the command trigger to be output on a display of the selected display device.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: November 26, 2019
    Assignee: LG ELECTRONICS INC.
    Inventor: Gyuhyeok Jeong
  • Patent number: 10482896
    Abstract: The present invention relates to a multi-band noise reduction system for digital audio signals producing a noise reduced digital audio output signal from a digital audio signal. The digital audio signal comprises a target signal and a noise signal, i.e. a noisy digital audio signal. The multi-band noise reduction system operates on a plurality of sub-band signals derived from the digital audio signal and comprises a second or adaptive signal-to-noise ratio estimator which is configured for filtering a plurality of first signal-to-noise ratio estimates of the plurality of sub-band signals with respective time-varying low-pass filters to produce respective second signal-to-noise ratio estimates of the plurality of sub-band signals. A low-pass cut-off frequency of each of the time-varying low-pass filters is adaptable in accordance with a first signal-to-noise ratio estimate determined by a first signal-to-noise ratio estimator and/or the second signal-to-noise ratio estimate of the sub-band signal.
    Type: Grant
    Filed: May 29, 2018
    Date of Patent: November 19, 2019
    Assignee: Retune DSP ApS
    Inventors: Ulrik Kjems, Thomas Krogh Andersen
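The abstract describes filtering first per-band SNR estimates with time-varying low-pass filters whose cutoffs adapt to the SNR. A minimal one-pole smoothing sketch of that idea follows; the mapping from SNR to smoothing coefficient is an assumption, not Retune DSP's design.

```python
# Minimal sketch: per-band one-pole smoothing of SNR estimates, with the
# smoothing coefficient (effective cutoff) adapted to the current SNR so that
# high-SNR bands track quickly and low-SNR bands are smoothed harder.

import numpy as np

def smoothing_coefficient(snr_db: float) -> float:
    """Higher SNR -> larger coefficient -> faster tracking (higher effective cutoff)."""
    return float(np.clip(0.05 + 0.9 * (snr_db / 30.0), 0.05, 0.95))

def second_snr_estimates(first_snr_db: np.ndarray) -> np.ndarray:
    """first_snr_db: shape (frames, bands) of first SNR estimates in dB."""
    smoothed = np.zeros_like(first_snr_db)
    state = first_snr_db[0].astype(float).copy()
    for t, frame in enumerate(first_snr_db):
        for b, snr in enumerate(frame):
            alpha = smoothing_coefficient(snr)         # time-varying one-pole low-pass per band
            state[b] = alpha * snr + (1.0 - alpha) * state[b]
        smoothed[t] = state
    return smoothed

frames = np.array([[20.0, -5.0], [18.0, -4.0], [25.0, -6.0]])   # toy (frames, bands) data
print(second_snr_estimates(frames))
```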
  • Patent number: 10475449
    Abstract: Example techniques involve determining a direction of a networked microphone device (NMD). An example implementation includes a playback device receiving data representing audio content for playback by the playback device. Before the audio content is played back by the playback device, the playback device detects, in the audio content, one or more wake words for one or more voice services. The playback device causes one or more networked microphone devices to disable their respective wake responses to the detected one or more wake words during playback of the audio content by the playback device and plays back the audio content via one or more speakers. When enabled, the wake response of a given networked microphone device to a particular wake word causes the given networked microphone device to listen, via a microphone, for a voice command following the particular wake word.
    Type: Grant
    Filed: August 7, 2017
    Date of Patent: November 12, 2019
    Assignee: Sonos, Inc.
    Inventor: Jonathan P. Lang
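The abstract describes detecting wake words in audio content before playback and telling networked microphone devices to disable their wake responses for the duration. A minimal sketch follows; the wake-word list, class names, and disable call are hypothetical, not Sonos' implementation.

```python
# Minimal sketch: scan the content for known wake words and, if any are found,
# suppress the NMDs' wake responses while the content plays.

KNOWN_WAKE_WORDS = {"alexa", "hey sonos", "ok google"}

def wake_words_in(transcript: str) -> set:
    text = transcript.lower()
    return {w for w in KNOWN_WAKE_WORDS if w in text}

class NetworkedMicrophoneDevice:
    def __init__(self, name: str):
        self.name = name
        self.suppressed = set()

    def disable_wake_response(self, words, duration_s: float):
        """Hypothetical stand-in for the instruction sent to the NMD."""
        self.suppressed |= set(words)
        print(f"{self.name}: ignoring {sorted(words)} for {duration_s:.0f}s")

def play_with_suppression(transcript: str, duration_s: float, nmds):
    detected = wake_words_in(transcript)
    for nmd in nmds:
        if detected:
            nmd.disable_wake_response(detected, duration_s)
    print("playing audio content...")

play_with_suppression("...and then she said, Alexa, play jazz...", 30.0,
                      [NetworkedMicrophoneDevice("kitchen"), NetworkedMicrophoneDevice("den")])
```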
  • Patent number: 10475441
    Abstract: A voice end-point detection device, a system and a method are provided. The voice end-point detection system includes a processor that is configured to determine an end-point detection time to detect an end-point of speaking of a user that varies for each user and for each domain. The voice end-point detection system is configured to perform voice recognition and a database (DB) is configured to store data for the voice recognition by the processor.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: November 12, 2019
    Assignees: Hyundai Motor Company, Kia Motors Corporation
    Inventors: Kyung Chul Lee, Jae Min Joh
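The abstract describes an end-point detection time that varies per user and per domain. A minimal lookup-table sketch of that idea follows; the user/domain keys and timeout values are invented for illustration.

```python
# Minimal sketch: silence timeout for end-pointing keyed by (user, domain),
# with a default fallback.

DEFAULT_TIMEOUT_S = 0.8
ENDPOINT_TIMEOUTS = {                       # (user_id, domain) -> seconds of silence that end speech
    ("driver_1", "navigation"): 1.4,        # addresses tend to be spoken with pauses
    ("driver_1", "media"): 0.6,
    ("driver_2", "navigation"): 1.0,
}

def endpoint_timeout(user_id: str, domain: str) -> float:
    return ENDPOINT_TIMEOUTS.get((user_id, domain), DEFAULT_TIMEOUT_S)

print(endpoint_timeout("driver_1", "navigation"))
print(endpoint_timeout("driver_3", "phone"))
```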
  • Patent number: 10460737
    Abstract: Conventional audio compression technologies perform a standardized signal transformation, independent of the type of the content. Multi-channel signals are decomposed into their signal components, subsequently quantized and encoded. This is disadvantageous due to a lack of knowledge of the characteristics of scene composition, especially for multi-channel audio or Higher-Order Ambisonics (HOA) content. A method for decoding an encoded bitstream of multi-channel audio data and associated metadata is provided, including transforming the first Ambisonics format of the multi-channel audio data to a second Ambisonics format representation of the multi-channel audio data, wherein the transforming maps the first Ambisonics format of the multi-channel audio data into the second Ambisonics format representation of the multi-channel audio data.
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: October 29, 2019
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Oliver Wuebbolt, Johannes Boehm, Peter Jax
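The abstract concerns mapping a first Ambisonics format of multi-channel audio to a second Ambisonics format representation. One simple, concrete example of such a format mapping is a normalization change; the sketch below rescales ACN-ordered HOA channels from SN3D to N3D and is not the patented decoder, which may also involve reordering channels or other transforms.

```python
# Minimal sketch: SN3D -> N3D normalization change for ACN-ordered HOA channels.

import numpy as np

def sn3d_to_n3d(hoa_channels: np.ndarray) -> np.ndarray:
    """hoa_channels: array of shape (num_channels, num_samples) in ACN/SN3D."""
    acn = np.arange(hoa_channels.shape[0])
    order = np.floor(np.sqrt(acn)).astype(int)   # Ambisonics order of each ACN channel
    gains = np.sqrt(2 * order + 1)               # N3D = SN3D * sqrt(2n + 1)
    return hoa_channels * gains[:, None]

first_order_sn3d = np.ones((4, 8))               # W, Y, Z, X channels, toy samples
print(sn3d_to_n3d(first_order_sn3d)[:, 0])       # expect [1, sqrt(3), sqrt(3), sqrt(3)]
```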
  • Patent number: 10460044
    Abstract: A system, computer-readable medium, and a method including receiving a textual representation of a natural language expression for a system requirement; analyzing, by the processor, the textual representation of the natural language expression to determine a natural language object, the natural language object including the textual representation of the natural language expression and syntactic attributes derived therefrom; traversing, by the processor, a grammar graph representation of a modeling language to determine a partial translation of the natural language object, the partial translation including at least one ontology concept placeholder; determining, by the processor, ontology concepts corresponding to the at least one ontology concept placeholder to complete a translation of the textual representation of the natural language expression; and generating a record of the completed translation.
    Type: Grant
    Filed: May 26, 2017
    Date of Patent: October 29, 2019
    Assignee: General Electric Company
    Inventors: Emily Cooper LeBlanc, Andrew Crapo
  • Patent number: 10431210
    Abstract: A whole sentence recurrent neural network (RNN) language model (LM) is provided for estimating a probability of likelihood of each whole sentence processed by natural language processing being correct. A noise contrastive estimation sampler is applied against at least one entire sentence from a corpus of multiple sentences to generate at least one incorrect sentence. The whole sentence RNN LM is trained, using the at least one entire sentence from the corpus and the at least one incorrect sentence, to distinguish the at least one entire sentence as correct. The whole sentence recurrent neural network language model is applied to estimate the probability of likelihood of each whole sentence processed by natural language processing being correct.
    Type: Grant
    Filed: April 16, 2018
    Date of Patent: October 1, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Yinghui Huang, Abhinav Sethy, Kartik Audhkhasi, Bhuvana Ramabhadran
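The abstract describes generating incorrect sentences with a noise contrastive estimation sampler and training the whole-sentence model to distinguish real corpus sentences from them. The sketch below reproduces that training setup in miniature, with word shuffling as the noise sampler and a bag-of-bigrams logistic scorer standing in for the whole-sentence RNN; everything here is illustrative, not IBM's model.

```python
# Minimal sketch: corrupt corpus sentences to create noise samples, then train
# a scorer to separate real sentences (label 1) from corrupted ones (label 0).

import random
import numpy as np

CORPUS = ["the cat sat on the mat", "she read the book quickly", "we will meet tomorrow morning"]
random.seed(0)

def corrupt(sentence: str) -> str:
    """Noise sample: same words, scrambled order (stand-in for an NCE sampler)."""
    words = sentence.split()
    random.shuffle(words)
    return " ".join(words)

def bigrams(sentence: str) -> set:
    toks = sentence.split()
    return set(zip(toks, toks[1:]))

NOISE = [corrupt(s) for s in CORPUS]
FEATURE_SPACE = sorted({b for s in CORPUS + NOISE for b in bigrams(s)})

def featurize(sentence: str) -> np.ndarray:
    present = bigrams(sentence)
    return np.array([1.0 if b in present else 0.0 for b in FEATURE_SPACE])

X = np.array([featurize(s) for s in CORPUS + NOISE])
y = np.array([1.0] * len(CORPUS) + [0.0] * len(NOISE))
w = np.zeros(X.shape[1])
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.5 * X.T @ (y - p)              # gradient ascent on the log-likelihood

print("P(correct):", 1.0 / (1.0 + np.exp(-featurize("the cat sat on the mat") @ w)))
```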