Patents Examined by Daniel Abebe
  • Patent number: 10741198
    Abstract: An information processing apparatus includes a memory, and a processor coupled to the memory and configured to specify a first signal level of a first voice signal, specify a second signal level of a second voice signal, and execute evaluation of at least one of the first voice signal and the second voice signal based on at least one of a sum of the first signal level and the second signal level and an average of the first signal level and the second signal level.
    Type: Grant
    Filed: July 13, 2018
    Date of Patent: August 11, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Taro Togawa, Sayuri Nakayama, Takeshi Otani
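    A minimal Python sketch of the evaluation described in the abstract of patent 10741198 above, assuming the signal level is an RMS level in dB and the evaluation is a simple threshold test on the average level; the function names and the threshold are illustrative, not taken from the patent.

      import numpy as np

      def signal_level_db(x: np.ndarray) -> float:
          """RMS level of a voice signal in dB (illustrative definition of signal level)."""
          rms = np.sqrt(np.mean(np.square(x)) + 1e-12)
          return 20.0 * np.log10(rms)

      def evaluate(first: np.ndarray, second: np.ndarray, threshold_db: float = -30.0) -> dict:
          """Evaluate the two voice signals from the sum and the average of their levels."""
          l1, l2 = signal_level_db(first), signal_level_db(second)
          return {
              "first_level_db": l1,
              "second_level_db": l2,
              "sum_db": l1 + l2,
              "average_db": (l1 + l2) / 2.0,
              "speech_present": (l1 + l2) / 2.0 > threshold_db,  # hypothetical criterion
          }

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          print(evaluate(0.1 * rng.standard_normal(16000), 0.2 * rng.standard_normal(16000)))
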
  • Patent number: 10736540
    Abstract: A system for the control of an implant (32) in a body (11), comprising first (10, 20) and second parts (12) which communicate with each other. The first part (10, 20) is adapted for implantation and for control of and communication with the medical implant (32), and the second part (12) is adapted to be worn on the outside of the body (11) in contact with the body and to receive control commands from a user and to transmit them to the first part (10, 20). The body (11) is used as a conductor for communication between the first (10, 20) and the second (12) parts. The second part (12) is adapted to receive and recognize voice control commands from a user and to transform them into signals which are transmitted to the first part (10, 20) via the body (11).
    Type: Grant
    Filed: December 27, 2016
    Date of Patent: August 11, 2020
    Inventor: Peter Forsell
  • Patent number: 10741180
    Abstract: Methods and systems for adding functionality to an account of a language processing system, where the functionality is associated with a second account of a first application system, are described herein. In a non-limiting embodiment, an individual may log into a first account of a language processing system and log into a second account of a first application system. While logged into both the first account and the second account, a button included within a webpage provided by the first application system may be invoked. A request capable of being serviced using the first functionality may be received by the language processing system from a device associated with the first account. The language processing system may send first account data and second account data to the first application system to facilitate an action associated with the request, thereby enabling the first functionality for the first account.
    Type: Grant
    Filed: September 26, 2018
    Date of Patent: August 11, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Ganesh Kumar Gella, Venkata Abhinav Sidharth Bhagavatula, Robert William Serr, Yonnas Getahun Beyene
  • Patent number: 10733994
    Abstract: A dialogue system, a vehicle, and a method for controlling the vehicle are provided. The vehicle monitors at least one of vehicle state information and driving environment information; generates a vehicle use pattern based on the monitored information; generates and stores notification event information corresponding to the vehicle use pattern; determines whether the monitored information corresponds to any one item of the notification event information; when it does, determines the notification timing of that notification event information; when the current time is the notification timing, outputs guide information on the notification event information; generates a control command corresponding to the notification event information; and executes a function corresponding to the notification event information based on the control command.
    Type: Grant
    Filed: December 3, 2018
    Date of Patent: August 4, 2020
    Assignees: Hyundai Motor Company, KIA Motors Corporation
    Inventors: Seona Kim, Donghee Seok, Dongsoo Shin, Jeong-Eom Lee, Ga Hee Kim, Jung Mi Park, HeeJin Ro, Kye Yoon Kim
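    A toy Python sketch of the notification flow described for patent 10733994 above, assuming notification events are stored as simple records with a matching predicate, a notification hour derived from the vehicle use pattern, guide text, and a control command; all field names and the example event are hypothetical.

      from dataclasses import dataclass
      from typing import Callable, List, Optional

      @dataclass
      class NotificationEvent:
          """One notification event derived from the vehicle use pattern (illustrative)."""
          name: str
          matches: Callable[[dict], bool]  # does the monitored information correspond to it?
          notify_hour: int                 # notification timing learned from the use pattern
          guide_text: str
          control_command: str

      def check_and_notify(monitored: dict, current_hour: int,
                           events: List[NotificationEvent]) -> Optional[str]:
          """If the monitored information matches an event and the current time is that
          event's notification timing, output the guide information and return the
          control command whose function should be executed."""
          for event in events:
              if event.matches(monitored) and current_hour == event.notify_hour:
                  print(event.guide_text)  # stands in for spoken/displayed guidance
                  return event.control_command
          return None

      if __name__ == "__main__":
          low_fuel = NotificationEvent(
              name="refuel_reminder",
              matches=lambda info: info.get("fuel_level", 1.0) < 0.15,
              notify_hour=8,  # the driver typically refuels on the morning commute
              guide_text="Fuel is low. Shall I add a gas station to your usual route?",
              control_command="navigate_to_gas_station",
          )
          print(check_and_notify({"fuel_level": 0.10}, current_hour=8, events=[low_fuel]))
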
  • Patent number: 10734003
    Abstract: A linear prediction-based noise signal processing method includes obtaining a linear prediction coefficient of the noise signal, filtering a signal derived from the noise signal based on the linear prediction coefficient to obtain a linear prediction residual signal, obtaining excitation energy of the linear prediction residual signal and a spectral envelope of the linear prediction residual signal, and encoding the spectral envelope, the excitation energy, and the linear prediction coefficient.
    Type: Grant
    Filed: October 23, 2018
    Date of Patent: August 4, 2020
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: Zhe Wang
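    A minimal Python sketch of the linear prediction-based analysis described for patent 10734003 above: autocorrelation LPC, analysis filtering to obtain the residual, the residual's excitation energy, and a coarse per-band spectral envelope. The per-band averaging and the frame/order sizes are assumptions, not taken from the patent.

      import numpy as np

      def lpc_coefficients(x: np.ndarray, order: int) -> np.ndarray:
          """Autocorrelation-method LPC coefficients [1, a1, ..., ap] via Levinson-Durbin."""
          n = len(x)
          r = [float(np.dot(x[: n - lag], x[lag:])) for lag in range(order + 1)]
          a = [1.0]
          err = r[0] + 1e-12
          for i in range(1, order + 1):
              acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
              k = -acc / err
              a = [1.0] + [a[j] + k * a[i - j] for j in range(1, i)] + [k]
              err *= 1.0 - k * k
          return np.asarray(a)

      def analyze_noise_frame(x: np.ndarray, order: int = 16, num_bands: int = 8):
          """Return (lpc, excitation_energy, coarse_spectral_envelope) for one noise frame."""
          a = lpc_coefficients(x, order)
          residual = np.convolve(x, a)[: len(x)]       # LPC analysis filtering
          excitation_energy = float(np.sum(residual ** 2))
          spectrum = np.abs(np.fft.rfft(residual))
          bands = np.array_split(spectrum, num_bands)  # coarse per-band envelope (assumption)
          envelope = np.array([band.mean() for band in bands])
          return a, excitation_energy, envelope

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          noise = rng.standard_normal(320)             # one 20 ms frame at 16 kHz
          lpc, energy, env = analyze_noise_frame(noise)
          print(energy, env.round(2))
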
  • Patent number: 10734008
    Abstract: An apparatus for generating an audio signal envelope from one or more coding values is provided. The apparatus includes an input interface for receiving the one or more coding values, and an envelope generator for generating the audio signal envelope depending on the one or more coding values. The envelope generator is configured to generate an aggregation function depending on the one or more coding values, wherein the aggregation function includes a plurality of aggregation points. Furthermore, the envelope generator is configured to generate the audio signal envelope such that the envelope value of each of the envelope points of the audio signal envelope depends on the aggregation value of at least one aggregation point of the aggregation function.
    Type: Grant
    Filed: March 13, 2018
    Date of Patent: August 4, 2020
    Assignee: Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
    Inventors: Tom Baeckstroem, Benjamin Schubert, Markus Multrus, Sascha Disch, Konstantin Schmidt, Grzegorz Pietrzyk
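    A speculative Python sketch of the envelope generation described for patent 10734008 above, assuming the aggregation function is the running sum of the coding values and each envelope value is derived from neighbouring aggregation points by interpolation and differencing; this mapping is an illustrative guess, not the patented construction.

      import numpy as np

      def generate_envelope(coding_values: np.ndarray, num_envelope_points: int) -> np.ndarray:
          """Illustrative envelope generation from one or more coding values."""
          # Aggregation function: running sum of the coding values (assumption).
          aggregation = np.cumsum(coding_values, dtype=float)
          # Place aggregation points and envelope points on a common axis.
          agg_pos = np.linspace(0.0, 1.0, num=len(aggregation))
          env_pos = np.linspace(0.0, 1.0, num=num_envelope_points)
          # Each envelope value depends on the aggregation value(s) around its position.
          interpolated = np.interp(env_pos, agg_pos, aggregation)
          # Convert the monotone aggregation values back into local envelope heights.
          envelope = np.diff(interpolated, prepend=0.0)
          return np.maximum(envelope, 0.0)

      if __name__ == "__main__":
          print(generate_envelope(np.array([1.0, 3.0, 2.0, 0.5]), 16))
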
  • Patent number: 10719551
    Abstract: A song determining method and device are provided. According to an embodiment of the present disclosure, the audio file in a video is extracted, and a candidate song identification is acquired for each candidate song to which a segment of the audio file belongs, yielding a candidate song identification set. The candidate song file corresponding to each candidate song identification is then acquired and matched against the audio file to obtain a matched audio frame unit, which includes multiple continuous matched audio frames. Based on the matched audio frame unit corresponding to each candidate song identification, the target song identification of the target song to which the segment belongs is acquired from the candidate song identification set, and the target song is determined according to the target song identification.
    Type: Grant
    Filed: August 13, 2018
    Date of Patent: July 21, 2020
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventor: Weifeng Zhao
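    A small Python sketch of the selection step described for patent 10719551 above, assuming each candidate song has already been matched against the extracted audio file and is represented by the indices of its matched audio frames; the target song is the candidate with the longest run of continuous matched frames. The minimum-run threshold is an assumption.

      from typing import Dict, List, Optional

      def longest_continuous_run(matched_frames: List[int]) -> int:
          """Length of the longest run of consecutive matched audio frame indices."""
          frames = sorted(set(matched_frames))
          best = run = 0
          for i, f in enumerate(frames):
              run = run + 1 if i > 0 and f == frames[i - 1] + 1 else 1
              best = max(best, run)
          return best

      def pick_target_song(candidate_matches: Dict[str, List[int]], min_run: int = 8) -> Optional[str]:
          """Choose the candidate whose matched-audio-frame unit (longest continuous run) is largest."""
          scored = {sid: longest_continuous_run(fr) for sid, fr in candidate_matches.items()}
          best_id = max(scored, key=scored.get, default=None)
          return best_id if best_id is not None and scored[best_id] >= min_run else None

      if __name__ == "__main__":
          print(pick_target_song({"songA": [3, 4, 5, 6, 9], "songB": [1, 7, 20]}, min_run=3))
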
  • Patent number: 10720159
    Abstract: Methods and systems for rendering lists of instructions and performing actions associated with those lists are described herein. In some embodiments, an individual may request that a voice activated electronic device associated with their user account assist in performing a task using a list of instructions. The list of instructions may include metadata that indicates actions capable of being performed by additional Internet of Things (“IoT”) devices. When the instructions are rendered, an instructions speechlet may recognize the metadata and may cause one or more of the IoT devices to perform a particular action. Furthermore, the metadata may also correspond to content capable of being rendered by the voice activated electronic device to assist the individual in performing a particular step of the instructions.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: July 21, 2020
    Assignee: Amazon Technologies, Inc.
    Inventor: Manoj Sindhwani
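    A toy Python sketch of rendering an instruction list whose steps carry metadata, as described for patent 10720159 above; the step/metadata schema, device names, and callbacks are all hypothetical stand-ins for the speechlet and IoT interfaces.

      from typing import Any, Dict, List

      def render_instructions(steps: List[Dict[str, Any]], iot_send, render_content) -> None:
          """Walk an instruction list; when a step carries metadata, dispatch the indicated
          IoT action and/or render the associated content on the voice-activated device."""
          for step in steps:
              print("Step:", step["text"])  # stands in for the spoken instruction
              meta = step.get("metadata", {})
              if "iot_action" in meta:
                  iot_send(meta["device"], meta["iot_action"])
              if "content" in meta:
                  render_content(meta["content"])

      if __name__ == "__main__":
          recipe = [
              {"text": "Preheat the oven to 350 degrees.",
               "metadata": {"device": "smart_oven", "iot_action": {"set_temp_f": 350}}},
              {"text": "Whisk the eggs until frothy.",
               "metadata": {"content": "video://whisking-technique"}},
              {"text": "Bake for 25 minutes."},
          ]
          render_instructions(recipe,
                              iot_send=lambda device, action: print("IoT ->", device, action),
                              render_content=lambda c: print("Render ->", c))
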
  • Patent number: 10714097
    Abstract: A frame error concealment (FEC) method is provided. The method includes: selecting an FEC mode based on states of a current frame and a previous frame of the current frame in a time domain signal generated after time-frequency inverse transform processing; and performing corresponding time domain error concealment processing on the current frame based on the selected FEC mode, wherein the current frame is an error frame or the current frame is a normal frame when the previous frame is an error frame.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: July 14, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ho-sang Sung, Nam-suk Lee
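    A minimal Python sketch of the FEC mode selection described for patent 10714097 above, keyed only on whether the current and previous frames were received correctly; the mode names and the mapping are illustrative, not the patented modes.

      from enum import Enum, auto

      class FecMode(Enum):
          REPEAT_AND_FADE = auto()    # current frame lost, previous frame good
          CONTINUED_FADE = auto()     # current and previous frames both lost
          OVERLAP_SMOOTHING = auto()  # current frame good, previous frame was lost
          NONE = auto()               # nothing to conceal

      def select_fec_mode(current_frame_ok: bool, previous_frame_ok: bool) -> FecMode:
          """Map the states of the current and previous time-domain frames to an FEC mode."""
          if not current_frame_ok:
              return FecMode.REPEAT_AND_FADE if previous_frame_ok else FecMode.CONTINUED_FADE
          return FecMode.OVERLAP_SMOOTHING if not previous_frame_ok else FecMode.NONE

      if __name__ == "__main__":
          print(select_fec_mode(current_frame_ok=False, previous_frame_ok=True))
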
  • Patent number: 10713009
    Abstract: A user speech interface for interactive media guidance applications, such as television program guides, guides for audio services, guides for video-on-demand (VOD) services, guides for personal video recorders (PVRs), or other suitable guidance applications is provided. Voice commands may be received from a user and guidance activities may be performed in response to the voice commands.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: July 14, 2020
    Assignee: Rovi Guides, Inc.
    Inventors: M. Scott Reichardt, David M. Berezowski, Michael D. Ellis, Toby DeWeese
  • Patent number: 10706858
    Abstract: There is provided an error concealment unit, method, and computer program for providing error concealment audio information for concealing a loss of an audio frame in encoded audio information. In one embodiment, the error concealment unit is configured to provide the error concealment audio information using a frequency domain concealment based on a properly decoded audio frame preceding a lost audio frame. The error concealment unit is configured to fade out a concealed audio frame according to different damping factors for different frequency bands.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: July 7, 2020
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Jérémie Lecomte, Adrian Tomasek
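    A minimal Python sketch of the per-band fade-out described for patent 10706858 above, assuming the concealed frame is built by scaling the spectrum of the last properly decoded frame with band-specific damping factors raised to the number of consecutively lost frames; the band layout and factors are illustrative.

      import numpy as np

      def fade_concealed_spectrum(last_good_spectrum: np.ndarray,
                                  band_edges: list,
                                  damping_factors: list,
                                  lost_frame_index: int) -> np.ndarray:
          """Scale each band of the last properly decoded frame's spectrum by its own
          damping factor, raised to the number of consecutively lost frames."""
          concealed = last_good_spectrum.astype(float).copy()
          for (lo, hi), d in zip(band_edges, damping_factors):
              concealed[lo:hi] *= d ** (lost_frame_index + 1)
          return concealed

      if __name__ == "__main__":
          spectrum = np.ones(8)
          bands = [(0, 4), (4, 8)]
          damping = [0.9, 0.5]  # higher band fades faster (illustrative choice)
          print(fade_concealed_spectrum(spectrum, bands, damping, lost_frame_index=0))
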
  • Patent number: 10691896
    Abstract: Examples of the present disclosure describe systems and methods relating to conversational system user behavior identification. A user of the conversational system may be evaluated based on one or more factors. The one or more factors may be compared to an aggregated measure for a larger group of conversational system users, such that “anomalous” behavior (e.g., behavior that deviates from a normal behavior) may be identified. When a user is identified as exhibiting anomalous behavior, the conversational system may adapt its interactions with the user in order to encourage, discourage, or further observe the identified behavior. As a result, the conversational system may be able to verify a user's anomalous behavior, discourage the anomalous behavior, or take other action while interacting with the user.
    Type: Grant
    Filed: October 2, 2018
    Date of Patent: June 23, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Joseph Edwin Johnson, Jr., Emmanouil Koukoumidis, Donald Brinkman, Matthew Schuerman
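    A minimal Python sketch of the anomaly check described for patent 10691896 above, assuming the aggregated measure is the population mean and standard deviation of each factor and "anomalous" means exceeding a z-score threshold; the factor names and threshold are illustrative.

      from statistics import mean, stdev
      from typing import Dict, List

      def is_anomalous(user_factors: Dict[str, float],
                       population_factors: Dict[str, List[float]],
                       z_threshold: float = 3.0) -> bool:
          """Flag a user whose value for any factor deviates strongly from the
          aggregated measure (mean/stdev) of the wider user population."""
          for name, value in user_factors.items():
              samples = population_factors.get(name, [])
              if len(samples) < 2:
                  continue  # not enough data to aggregate for this factor
              mu, sigma = mean(samples), stdev(samples)
              if sigma > 0 and abs(value - mu) / sigma > z_threshold:
                  return True
          return False

      if __name__ == "__main__":
          print(is_anomalous({"messages_per_minute": 40.0},
                             {"messages_per_minute": [2.0, 3.0, 2.5, 4.0, 3.5]}))
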
  • Patent number: 10685642
    Abstract: An information processing method includes determining that information received by an electronic device is information of a first type, converting the first type information to information of a second type, and, in response to receiving a presentation instruction for the second type information, presenting the second type information on the electronic device using a presentation mode corresponding to the second type information.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: June 16, 2020
    Assignee: LENOVO (BEIJING) CO., LTD.
    Inventors: Chen Chen, Xiaoping Zhang
  • Patent number: 10685669
    Abstract: This disclosure describes techniques for identifying a voice-enabled device from a group of voice-enabled devices to respond to a speech utterance of a user. A speech-processing system may receive an audio signal representing the speech utterance captured in an environment of a voice-enabled device, and identify another voice-enabled device located in the environment. The system may analyze the audio signal using a different natural-language-understanding model for each of the voice-enabled devices to identify an intent for each of the voice-enabled devices to respond to the speech utterance. The system may determine confidence scores that the intents are responsive to the speech utterance, and select the intent with the highest confidence score. The system may use the selected intent to generate a command for the corresponding voice-enabled device to respond to the user.
    Type: Grant
    Filed: March 20, 2018
    Date of Patent: June 16, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Gang Lan, Joseph Pedro Tavares, Deepak Uttam Shah, Mckay Clawson, Vijay Shankar Tennety, Ravi Kiran Rachakonda, Venkata Snehith Cherukuri, Charles James Torbert
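    A small Python sketch of the arbitration described for patent 10685669 above, assuming each voice-enabled device is paired with its own NLU model returning an (intent, confidence) pair and the highest-confidence intent wins; the lambda models are placeholders, not real NLU.

      from typing import Callable, Dict, Tuple

      # Each voice-enabled device gets its own NLU model mapping an utterance to
      # an (intent, confidence) pair; the lambdas below are placeholder models.
      NluModel = Callable[[str], Tuple[str, float]]

      def arbitrate(utterance: str, device_models: Dict[str, NluModel]) -> Tuple[str, str]:
          """Run every device's model on the utterance and return (device, intent)
          for the highest-confidence interpretation."""
          best = max(((dev, *model(utterance)) for dev, model in device_models.items()),
                     key=lambda item: item[2])
          return best[0], best[1]

      if __name__ == "__main__":
          models: Dict[str, NluModel] = {
              "kitchen_light": lambda u: ("TurnOn", 0.62 if "light" in u else 0.05),
              "thermostat": lambda u: ("SetTemperature", 0.91 if "degrees" in u else 0.10),
          }
          print(arbitrate("set it to 72 degrees", models))
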
  • Patent number: 10679629
    Abstract: A device can perform device arbitration, even when the device is unable to communicate with a remote system over a wide area network (e.g., the Internet). Upon detecting a wakeword in an utterance, the device can wait a period of time for data to arrive at the device, which, if received, indicates to the device that another speech interface device in the environment detected an utterance. If the device receives data prior to the period of time lapsing, the device can determine the earliest-occurring wakeword based on multiple wakeword occurrence times, and may designate whichever device that detected the wakeword first as the designated device to perform an action with respect to the user speech. To account for differences in sound capture latency between speech interface devices, a pre-calculated time offset value can be applied to wakeword occurrence time(s) during device arbitration.
    Type: Grant
    Filed: April 9, 2018
    Date of Patent: June 9, 2020
    Assignee: Amazon Technologies, Inc.
    Inventor: Stanislaw Ignacy Pasko
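    A minimal Python sketch of the local arbitration described for patent 10679629 above, assuming reports from other devices arrive on a queue, each wakeword occurrence time is corrected by a pre-calculated capture-latency offset, and the earliest corrected time wins; for brevity the sketch waits for at most one report.

      import queue
      from typing import Dict, Tuple

      def arbitrate_wakeword(local_device_id: str,
                             local_wakeword_time: float,
                             report_queue: "queue.Queue[Tuple[str, float]]",
                             offsets: Dict[str, float],
                             wait_window_s: float = 0.2) -> str:
          """Wait briefly for another device's wakeword report, correct every occurrence
          time by its pre-calculated capture-latency offset, and return the device that
          detected the wakeword first (only one report is awaited, for brevity)."""
          times = {local_device_id: local_wakeword_time - offsets.get(local_device_id, 0.0)}
          try:
              device_id, t = report_queue.get(timeout=wait_window_s)
              times[device_id] = t - offsets.get(device_id, 0.0)
          except queue.Empty:
              pass  # no other device reported in time; the local device handles the speech
          return min(times, key=times.get)

      if __name__ == "__main__":
          reports: "queue.Queue[Tuple[str, float]]" = queue.Queue()
          reports.put(("kitchen_device", 12.340))
          print(arbitrate_wakeword("living_room_device", 12.345, reports,
                                   offsets={"kitchen_device": 0.010}))
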
  • Patent number: 10679638
    Abstract: The coding efficiency of an audio codec using a controllable (switchable or even adjustable) harmonic filter tool is improved by performing the harmonicity-dependent control of this tool using a temporal structure measure in addition to a measure of harmonicity. In particular, the temporal structure of the audio signal is evaluated in a manner that depends on the pitch. This enables situation-adapted control of the harmonic filter tool: in situations where a decision based solely on the measure of harmonicity would reject or reduce the usage of the tool even though applying it would increase the coding efficiency, the harmonic filter tool is applied, while in situations where the harmonic filter tool may be inefficient or even destructive, the control reduces its application appropriately.
    Type: Grant
    Filed: August 30, 2018
    Date of Patent: June 9, 2020
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Goran Markovic, Christian Helmrich, Emmanuel Ravelli, Manuel Jander, Stefan Doehla
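    A speculative Python sketch of the control rule described for patent 10679638 above: the harmonic filter is enabled on harmonicity alone when harmonicity is high, and also in borderline cases where a pitch-dependent temporal-structure measure indicates a steady signal. The thresholds, the flatness-style measure, and the gain values are illustrative guesses, not the patented rule.

      def harmonic_filter_gain(harmonicity: float,
                               temporal_flatness: float,
                               pitch_lag: int,
                               harmonicity_on: float = 0.6,
                               harmonicity_soft: float = 0.3) -> float:
          """Return the strength with which to apply the harmonic filter: full strength
          for clearly harmonic frames, reduced strength for borderline frames whose
          pitch-dependent temporal structure looks steady, and zero otherwise."""
          if harmonicity >= harmonicity_on:
              return min(1.0, harmonicity)
          if harmonicity >= harmonicity_soft and pitch_lag > 0 and temporal_flatness < 1.0:
              return 0.5 * harmonicity  # rescued by the temporal-structure cue
          return 0.0  # filter disabled: likely inefficient or even destructive here

      if __name__ == "__main__":
          print(harmonic_filter_gain(0.45, temporal_flatness=0.7, pitch_lag=120))
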
  • Patent number: 10679000
    Abstract: A method and a system for interpreting conversational authoring of information models. The system includes an understanding module, a managing module, and a generating module. The understanding module is configured to interpret a natural language input and produce an output. The managing module is configured to construct an information model based on the output of the understanding module. The generating module is configured to generate prompts in response to the natural language inputs, wherein the natural language inputs determine concepts and relationships of the concepts. The method includes receiving an interactive dialog between a conversational agent and an information model designer in natural language to produce an information model. The method can further include validating the information model using an information model management system. The method can include interpreting the information model with the use of an application.
    Type: Grant
    Filed: January 9, 2018
    Date of Patent: June 9, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Martin Hirzel, Avraham Ever Shinnar, Jerome Simeon
  • Patent number: 10672404
    Abstract: An apparatus for decoding an encoded audio signal to obtain a reconstructed audio signal is provided, having: a receiving interface for receiving one or more frames, a coefficient generator, and a signal reconstructor. The coefficient generator is configured to determine one or more first audio signal coefficients, and one or more noise coefficients. Moreover, the coefficient generator is configured to generate one or more second audio signal coefficients, depending on the one or more first audio signal coefficients and depending on the one or more noise coefficients.
    Type: Grant
    Filed: May 2, 2018
    Date of Patent: June 2, 2020
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Michael Schnabel, Goran Markovic, Ralph Sperschneider, Jérémie Lecomte, Christian Helmrich
  • Patent number: 10672401
    Abstract: Systems, apparatus and methods are described including operations for a dual mode GMM (Gaussian Mixture Model) scoring accelerator for both speech and video data.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: June 2, 2020
    Assignee: Intel Corporation
    Inventors: Nikhil Pantpratinidhi, Gokcen Cilingir, Michael Deisher, Ohad Falik, Michael Kounavis
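    A minimal NumPy sketch of the scoring that a GMM accelerator performs, as referenced by patent 10672401 above: per-frame log-likelihoods against a diagonal-covariance Gaussian mixture. The shapes and the log-sum-exp formulation are standard, but nothing here reflects the patented hardware design.

      import numpy as np

      def gmm_log_likelihoods(features: np.ndarray, weights: np.ndarray,
                              means: np.ndarray, variances: np.ndarray) -> np.ndarray:
          """Per-frame log-likelihoods of feature vectors under a diagonal-covariance GMM.
          features: (T, D); weights: (K,); means, variances: (K, D)."""
          diff = features[:, None, :] - means[None, :, :]                      # (T, K, D)
          exponent = -0.5 * np.sum(diff * diff / variances, axis=-1)           # (T, K)
          log_norm = -0.5 * np.sum(np.log(2.0 * np.pi * variances), axis=-1)   # (K,)
          log_comp = np.log(weights) + log_norm + exponent                     # (T, K)
          m = log_comp.max(axis=1, keepdims=True)                              # log-sum-exp
          return (m + np.log(np.exp(log_comp - m).sum(axis=1, keepdims=True))).ravel()

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          T, K, D = 5, 3, 4
          print(gmm_log_likelihoods(rng.standard_normal((T, D)), np.full(K, 1.0 / K),
                                    rng.standard_normal((K, D)), np.ones((K, D))))
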
  • Patent number: 10665236
    Abstract: Processing stacked data structures is provided. A system receives an input audio signal detected by a sensor of a local computing device, identifies an acoustic signature, and identifies an account corresponding to the signature. The system establishes a session and a profile stack data structure including a first profile layer having policies configured by a third-party device. The system pushes, to the profile stack data structure, a second profile layer retrieved from the account. The system parses the input audio signal to identify a request and a trigger keyword. The system generates, based on the trigger keyword and the second profile layer, a first action data structure compatible with the first profile layer. The system provides the first action data structure for execution. The system disassembles the profile stack data structure to remove the first profile layer or the second profile layer from the profile stack data structure.
    Type: Grant
    Filed: April 30, 2018
    Date of Patent: May 26, 2020
    Assignee: Google LLC
    Inventors: Anshul Kothari, Tarun Jain, Gaurav Bhaya, Lisa Takehana, Ruxandra Davies
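    A toy Python sketch of the profile stack data structure described for patent 10665236 above, assuming layers are plain dictionaries, lookups prefer the most recently pushed layer, and disassembling the stack removes everything above the baseline third-party layer; the policy names are illustrative.

      from typing import Any, Dict, List, Optional

      class ProfileStack:
          """Toy profile stack: a baseline third-party layer plus an account layer
          pushed for the duration of a session; lookups check the top layer first."""

          def __init__(self, first_profile_layer: Dict[str, Any]):
              self.layers: List[Dict[str, Any]] = [first_profile_layer]

          def push(self, layer: Dict[str, Any]) -> None:
              self.layers.append(layer)

          def lookup(self, policy: str) -> Optional[Any]:
              for layer in reversed(self.layers):  # most recently pushed layer wins
                  if policy in layer:
                      return layer[policy]
              return None

          def disassemble(self) -> None:
              """Remove the pushed layers when the session ends, keeping the baseline."""
              del self.layers[1:]

      if __name__ == "__main__":
          stack = ProfileStack({"allowed_actions": ["order_towels"]})  # third-party policies
          stack.push({"allowed_actions": ["order_towels", "play_music"], "language": "en-US"})
          print(stack.lookup("allowed_actions"), stack.lookup("language"))
          stack.disassemble()
          print(stack.lookup("language"))
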