Patents Examined by Daniel Abebe
  • Patent number: 10429823
    Abstract: The invention provides a method for identifying a sequence of events associated with a condition in a process plant using a control system. The method comprises recording timestamped process data and recording audio input from each of a plurality of personnel of the process plant. The audio input is synchronized in time with the process data. A keyword is identified from the temporally synchronized content of each audio input and compared with the process information of an event or an item of equipment to identify new or supplementary process information related to the condition. The new or supplementary process information identified for each keyword, together with the plurality of events identified from the process data, is used to identify the sequence of events associated with the condition.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: October 1, 2019
    Assignee: ABB Schweiz AG
    Inventors: Jinendra Gugaliya, Naveen Bhutani, Kaushik Ghosh, Nandkishor Kubal, Vinay Kariwala, Wilhelm Wiese
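    A minimal Python sketch of the alignment step described in this abstract: spoken keywords are linked to process events by comparing timestamps and matching the keyword against event or equipment names. The data classes, the 30-second window, and the matching rule are illustrative assumptions rather than the patented method.
```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ProcessEvent:
    timestamp: float      # seconds since start of recording
    equipment: str        # e.g. "pump_3"
    description: str      # e.g. "high discharge pressure"

@dataclass
class SpokenKeyword:
    timestamp: float      # when the word was spoken
    text: str             # keyword extracted from an operator's audio

def link_keywords_to_events(keywords: List[SpokenKeyword],
                            events: List[ProcessEvent],
                            window_s: float = 30.0) -> List[Tuple[SpokenKeyword, ProcessEvent]]:
    """Associate each spoken keyword with process events recorded within a
    time window, matching on equipment name or a word in the event description."""
    links = []
    for kw in keywords:
        for ev in events:
            close_in_time = abs(kw.timestamp - ev.timestamp) <= window_s
            mentions_event = kw.text.lower() in (ev.equipment.lower(),
                                                 *ev.description.lower().split())
            if close_in_time and mentions_event:
                links.append((kw, ev))
    return links
```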
  • Patent number: 10418043
    Abstract: An apparatus and method for encoding and decoding a signal for high frequency bandwidth extension are provided. An encoding apparatus may down-sample a time domain input signal, may core-encode the down-sampled time domain input signal, may transform the core-encoded time domain input signal to a frequency domain input signal, and may perform bandwidth extension encoding using a basic signal of the frequency domain input signal.
    Type: Grant
    Filed: December 4, 2017
    Date of Patent: September 17, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ki Hyun Choo, Eun Mi Oh, Ho Sang Sung
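    The stage ordering named in the abstract (down-sample, core-encode, transform to the frequency domain, derive a basic signal for bandwidth-extension encoding) can be sketched as below. The decimation factor, the stand-in 16-bit "core encoder", and the eight-band envelope are placeholder assumptions; the patent's actual codec stages are far more involved.
```python
import numpy as np

def encode_with_bwe(x: np.ndarray, decimation: int = 2):
    """Illustrative ordering of the encoding stages described in the abstract."""
    # 1. Down-sample the time-domain input signal (naive decimation).
    x_low = x[::decimation]

    # 2. "Core-encode" the down-sampled signal (placeholder: 16-bit quantization).
    core = np.round(np.clip(x_low, -1.0, 1.0) * 32767).astype(np.int16)

    # 3. Transform the core-encoded signal to the frequency domain.
    spectrum = np.fft.rfft(core.astype(np.float64) / 32768.0)

    # 4. Treat the low half of the spectrum as the basic signal and keep a
    #    coarse spectral envelope as bandwidth-extension side information.
    basic_signal = spectrum[: len(spectrum) // 2]
    envelope = np.array([b.mean() for b in np.array_split(np.abs(basic_signal), 8)])
    return core, envelope
```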
  • Patent number: 10409915
    Abstract: A method for determining a personality profile of an online user is disclosed. Social speech content data associated with an online user is stored. A machine learning model is used to determine a first personality profile of the online user based at least in part on the social speech content data associated with the online user. A second personality profile of the online user is determined based on the social speech content data using a scientific personality model encoded in an ontology, wherein the ontology encodes statistical relationships between a plurality of words and a plurality of personality traits based on one or more scientific research studies. An ensemble model is applied to determine a third personality profile of the online user based at least in part on the first personality profile and the second personality profile.
    Type: Grant
    Filed: November 30, 2017
    Date of Patent: September 10, 2019
    Assignee: Ayzenberg Group, Inc.
    Inventors: John Galen Buckwalter, David Hans Herman, David Ryan Loker, Kai Mildenberger
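    The final combination step lends itself to a short sketch: two intermediate profiles, one from a machine learning model and one from the ontology-encoded scientific model, are merged into a third. The weighted average below is one possible ensemble rule and is an assumption here; the abstract does not commit to a specific formula.
```python
from typing import Dict

Profile = Dict[str, float]   # trait name -> score, e.g. {"openness": 0.7}

def ensemble_profile(ml_profile: Profile,
                     ontology_profile: Profile,
                     ml_weight: float = 0.5) -> Profile:
    """Blend a learned profile and an ontology-derived profile into a third
    profile with a simple weighted average (one possible ensemble rule)."""
    traits = set(ml_profile) | set(ontology_profile)
    return {
        t: ml_weight * ml_profile.get(t, 0.0)
           + (1.0 - ml_weight) * ontology_profile.get(t, 0.0)
        for t in traits
    }

# Example: combine the two intermediate profiles for one user.
third = ensemble_profile({"openness": 0.8, "extraversion": 0.4},
                         {"openness": 0.6, "extraversion": 0.5})
```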
  • Patent number: 10403279
    Abstract: A system for detecting and capturing voice commands, the system comprising a voice-activity detector (VAD) configured to receive a VAD-received digital-audio signal; determine the amplitude of the VAD-received digital-audio signal; compare the amplitude of the VAD-received digital-audio signal to a first threshold and to a second threshold; withhold a VAD interrupt signal when the amplitude of the VAD-received digital-audio signal does not exceed the first threshold or the second threshold; generate the VAD interrupt signal when the amplitude of the VAD-received digital-audio signal exceeds the first threshold and the second threshold; and perform spectral analysis of the VAD-received digital-audio signal when the amplitude of the VAD-received digital-audio signal is between the first threshold and the second threshold.
    Type: Grant
    Filed: September 15, 2017
    Date of Patent: September 3, 2019
    Assignee: Avnera Corporation
    Inventors: Xudong Zhao, Alexander C. Stange, Shawn O'Connor, Ali Hadiashar
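    The two-threshold decision is easy to express directly. In the sketch below the "first" threshold is taken to be the lower one and the "second" the higher one, which is an interpretation of the abstract rather than something it states; the function name and the frame-amplitude measure are likewise illustrative.
```python
import numpy as np

def vad_decision(frame: np.ndarray, low_thresh: float, high_thresh: float) -> str:
    """Decide what to do with one frame of the VAD-received digital-audio
    signal based on its amplitude relative to two thresholds."""
    amplitude = np.max(np.abs(frame))
    if amplitude > high_thresh:          # exceeds both thresholds
        return "interrupt"               # generate the VAD interrupt signal
    if amplitude <= low_thresh:          # exceeds neither threshold
        return "withhold"                # withhold the interrupt signal
    return "spectral_analysis"           # between thresholds: analyze the spectrum
```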
  • Patent number: 10403285
    Abstract: The disclosed methods and apparatus allow a lay person to easily and intuitively define virtual scenes using natural language commands and natural gestures. Natural language commands include statements that a person would naturally (e.g., spontaneously, simply, easily, intuitively) speak with little or no training. Example natural language commands include “put a cat on the box,” or “put a ball in front of the red box.” Natural gestures include gestures that a person would naturally perform or carry out (e.g., spontaneously, simply, easily, intuitively) with little or no training. Example natural gestures include pointing, a distance between hands, gazing, head tilt, kicking, etc. The person can simply speak and gesture however it occurs naturally to them.
    Type: Grant
    Filed: December 5, 2017
    Date of Patent: September 3, 2019
    Assignee: Google LLC
    Inventors: Tim Gleason, Jon Bedard, Darwin Yamamoto, Ian MacGillivray, Jason Toff
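    A toy command parser gives a feel for the natural-language half of the idea: one regular expression turns commands such as "put a cat on the box" into an (object, relation, reference) triple a scene builder could consume. The grammar is a deliberately tiny stand-in, and the gesture input the abstract also covers is not modeled.
```python
import re

# Tiny grammar for commands like "put a cat on the box" or
# "put a ball in front of the red box".
COMMAND = re.compile(
    r"put (?:a|an|the) (?P<obj>[\w ]+?) "
    r"(?P<rel>on|in front of|behind|under) (?:a|an|the) (?P<ref>[\w ]+)")

def parse_scene_command(utterance: str):
    """Turn one natural-language command into an (object, relation, reference)
    triple that a scene builder could act on; returns None if it does not parse."""
    m = COMMAND.match(utterance.lower().strip())
    return (m.group("obj"), m.group("rel"), m.group("ref")) if m else None

assert parse_scene_command("Put a cat on the box") == ("cat", "on", "box")
```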
  • Patent number: 10403280
    Abstract: A lamp device for inputting or outputting a voice signal, and a method of driving the same, are provided. The method of driving a lamp device includes receiving audio signals; performing voice recognition on a first audio signal among the received audio signals; generating an activation signal based on the voice recognition result; transmitting the activation signal to an external device; receiving a first control signal from the external device; and transmitting a second audio signal among the received audio signals to the external device in response to the first control signal. Various other exemplary embodiments may further be included.
    Type: Grant
    Filed: December 1, 2017
    Date of Patent: September 3, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yohan Lee, Jungkyun Ryu, Junho Park, Wonsik Song, Seungyong Lee, Youngsu Lee
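    The signal flow in the abstract (recognize a trigger in a first audio signal, send an activation signal, wait for a control signal, then forward a second audio signal) can be sketched as a small state machine. The class, trigger word, transcript argument, and message format are invented for illustration and do not come from the patent.
```python
class LampVoiceFrontEnd:
    """Sketch of the described flow: recognize a trigger in a first audio
    signal, notify an external device, then forward a second audio signal
    only after the external device replies with a control signal."""

    def __init__(self, external_device, trigger_word: str = "hello lamp"):
        self.external_device = external_device   # assumed to expose .send(...)
        self.trigger_word = trigger_word
        self.activated = False

    def on_audio(self, audio_chunk: bytes, transcript: str) -> None:
        if not self.activated:
            # Voice recognition on the first audio signal (transcript assumed
            # to come from an on-device recognizer).
            if self.trigger_word in transcript.lower():
                self.external_device.send({"type": "activation"})
        else:
            # The second audio signal is forwarded while activation is granted.
            self.external_device.send({"type": "audio", "payload": audio_chunk})

    def on_control_signal(self, control: dict) -> None:
        # The first control signal from the external device enables forwarding.
        self.activated = bool(control.get("allow_audio", False))
```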
  • Patent number: 10394957
    Abstract: A software agent that is used to assist in providing a service receives communications from a set of users that are attempting to use the software agent. The communications include both communications that interact with the software agent and communications that do not. The software agent performs natural language processing on all of the communications to identify items such as user sentiment and user concerns in the content of the messages, and also to identify actions taken by the users, in order to obtain a measure of user satisfaction with the software agent. One or more action signals are then generated based upon the identified user satisfaction with the software agent.
    Type: Grant
    Filed: September 25, 2017
    Date of Patent: August 27, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Benjamin Gene Cheung, Andres Monroy-Hernandez, Todd Daniel Newman, Mayerber Loureiro De Carvalho Neto, Michael Brian Palmer, Pamela Bhattacharya, Justin Brooks Cranshaw, Charles Yin-Che Lee
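    A sketch of the aggregation step: sentiment is estimated for every user communication, whether or not it is addressed to the agent, and an action signal is emitted when average satisfaction falls below a threshold. The word lists and scoring are toy stand-ins for the natural language processing the abstract refers to.
```python
from statistics import mean
from typing import Iterable

NEGATIVE = {"broken", "useless", "frustrating", "wrong", "slow"}
POSITIVE = {"thanks", "great", "helpful", "perfect", "nice"}

def sentiment(text: str) -> float:
    """Toy lexicon-based sentiment in [-1, 1]; a stand-in for real NLP."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return max(-1.0, min(1.0, score / max(len(words), 1) * 5))

def satisfaction_signal(messages: Iterable[str], threshold: float = 0.0) -> dict:
    """Aggregate sentiment across all user communications and emit an action
    signal when the resulting satisfaction measure is low."""
    avg = mean(sentiment(m) for m in messages)
    return {"satisfaction": avg, "action": "escalate" if avg < threshold else "none"}
```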
  • Patent number: 10395654
    Abstract: Systems and processes for operating an intelligent automated assistant to perform text-to-speech conversion are provided. An example method includes, at an electronic device having one or more processors, receiving a text corpus comprising unstructured natural language text. The method further includes generating a sequence of normalized text based on the received text corpus; and generating a pronunciation sequence representing the sequence of the normalized text. The method further includes causing an audio output to be provided to the user based on the pronunciation sequence. At least one of the sequence of normalized text and the pronunciation sequence is generated based on a data-driven learning network.
    Type: Grant
    Filed: August 10, 2017
    Date of Patent: August 27, 2019
    Assignee: Apple Inc.
    Inventors: Ladan Golipour, Matthias Neeracher, Ramya Rasipuram
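    The two-stage pipeline (text normalization followed by pronunciation generation) is illustrated below with rule-based stand-ins; the abstract states that at least one of these stages is a data-driven learning network, so the small dictionaries here are only placeholders showing where each stage fits.
```python
import re

# Toy stand-ins for the learned normalization and pronunciation stages.
NUMBER_WORDS = {"1": "one", "2": "two", "3": "three", "10": "ten"}
LEXICON = {"one": "W AH N", "two": "T UW", "meeting": "M IY T IH NG",
           "at": "AE T", "ten": "T EH N"}

def normalize(text: str) -> list:
    """Expand non-standard words (here, a few digits) into spoken-form tokens."""
    tokens = re.findall(r"[A-Za-z]+|\d+", text)
    return [NUMBER_WORDS.get(t, t.lower()) for t in tokens]

def to_pronunciation(tokens: list) -> list:
    """Map normalized tokens to a phone sequence via a small lexicon."""
    return [LEXICON.get(t, t.upper()) for t in tokens]

phones = to_pronunciation(normalize("Meeting at 10"))
# ['M IY T IH NG', 'AE T', 'T EH N']
```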
  • Patent number: 10387517
    Abstract: Methods, systems, and computer readable media for providing translated web content are described. A request is received from a user for content in a second language translated from content in a first language from a first Internet source. The content in the first language is obtained and divided into one or more translatable components. It is determined whether the one or more translatable components have been previously translated, via at least one of machine translation, human translation, or a combination thereof, into the second language and stored as translated components in a storage. If one or more translatable components have been previously translated and stored, the content is generated in the second language by modifying the content in the first language so that at least some translatable components are replaced with corresponding translated components, and the content in the second language is sent to the user as a response to the request.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: August 20, 2019
    Assignee: MOTIONPOINT CORPORATION
    Inventors: Enrique Travieso, Adam Rubenstein, William Fleming, Charles Whiteman, Eugenio Alvarez, Arcadio Andrade, Collin Birdsey
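    The caching behaviour is straightforward to sketch: each translatable component is looked up in a store of previous translations and is only translated afresh on a miss. The dictionary cache and the translate callback are simplifying assumptions; the described system also covers human translation and mixed sources feeding the same store.
```python
from typing import Callable, Dict, List

def translate_page(components: List[str],
                   cache: Dict[str, str],
                   translate: Callable[[str], str]) -> List[str]:
    """Replace each translatable component with a stored translation when one
    exists; otherwise translate it and store the result for later requests."""
    out = []
    for comp in components:
        if comp not in cache:
            cache[comp] = translate(comp)   # fall back to a fresh translation
        out.append(cache[comp])
    return out

# Example: the second request is served from the stored translations.
cache: Dict[str, str] = {}
page = ["Welcome", "Contact us"]
translate_page(page, cache, lambda s: f"<{s} in Spanish>")
translate_page(page, cache, lambda s: f"<{s} in Spanish>")  # cache hits only
```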
  • Patent number: 10380998
    Abstract: An improved system and method are disclosed for receiving a spoken or written utterance, identifying and replacing certain words within the utterance with labels to generate a simplified text string representing the utterance, performing intent classification based on the simplified text string, and performing an action based on the intent classification and the original words that were replaced.
    Type: Grant
    Filed: September 15, 2017
    Date of Patent: August 13, 2019
    Assignee: Endgame, Inc.
    Inventors: Robert Filar, Richard Seymour, Alexander Kahan
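    A compact sketch of the label-replacement idea: concrete values are swapped for labels, the intent classifier sees only the simplified string, and the saved originals are used when acting on the intent. The regexes, labels, and toy classifier are illustrative choices, not the patented models.
```python
import re

def simplify(utterance: str):
    """Replace concrete values with labels so the classifier sees a simplified
    text string; keep the original words for later use."""
    replaced = {}
    def sub(pattern, label, text):
        def repl(m):
            replaced.setdefault(label, []).append(m.group(0))
            return label
        return re.sub(pattern, repl, text)

    text = sub(r"\b[\w.]+@[\w.]+\b", "EMAIL", utterance)
    text = sub(r"\b\d+(\.\d+)?\b", "NUMBER", text)
    return text, replaced

def classify_intent(simplified: str) -> str:
    """Toy intent classifier over the simplified string."""
    if "send" in simplified and "EMAIL" in simplified:
        return "send_message"
    return "unknown"

text, slots = simplify("send the report to ana@example.com")
intent = classify_intent(text)          # 'send_message'
recipient = slots["EMAIL"][0]           # act using the original replaced value
```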
  • Patent number: 10381007
    Abstract: Methods, devices, and systems for processing audio information are disclosed. An exemplary method includes receiving an audio stream. The audio stream may be monitored by a low power integrated circuit. The audio stream may be digitized by the low power integrated circuit. The digitized audio stream may be stored in a memory, wherein storing the digitized audio stream comprises replacing a prior digitized audio stream stored in the memory with the digitized audio stream. The low power integrated circuit may analyze the stored digitized audio stream for recognition of a keyword. The low power integrated circuit may induce a processor to enter an increased power usage state upon recognition of the keyword within the stored digitized audio stream. The stored digitized audio stream may be transmitted to a server for processing. A response received from the server based on the processed audio stream may be rendered.
    Type: Grant
    Filed: January 6, 2017
    Date of Patent: August 13, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Eric Liu, Stefan Johannes Walter Marti, Seung Wook Kim
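    The low-power path can be sketched with a fixed-size buffer whose newest frames overwrite the oldest, a keyword check, and a wake-up callback. The keyword, buffer size, transcript argument, and callback interfaces are assumptions made for the example.
```python
from collections import deque

class AlwaysOnListener:
    """Sketch of the low-power path: keep only the most recent digitized audio
    in a fixed-size buffer, scan it for a keyword, and wake the main processor
    (and hand off the buffered audio) only when the keyword is recognized."""

    def __init__(self, keyword: str = "assistant", max_frames: int = 100):
        self.keyword = keyword
        self.buffer = deque(maxlen=max_frames)   # new frames replace the oldest

    def on_frame(self, digitized_frame: bytes, transcript_so_far: str,
                 wake_processor, send_to_server) -> None:
        self.buffer.append(digitized_frame)
        if self.keyword in transcript_so_far.lower():
            wake_processor()                       # leave the low-power state
            send_to_server(b"".join(self.buffer))  # forward the stored audio
```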
  • Patent number: 10381005
    Abstract: Systems, methods, and vehicle components for determining user frustration are disclosed. A method includes receiving, by a microphone communicatively coupled to a processing device, a voice input from a user, the voice input corresponding to an interaction between the user and a voice recognition system and including indicators of the user frustration. The method further includes determining, by the processing device, that the user is frustrated from the indicators, connecting, by the processing device, the user to a call center operator, transmitting, by the processing device, data to a call center computing device associated with the call center operator, the data corresponding to a user input and/or a vehicle response that resulted in the user frustration, receiving, by the processing device, commands from the call center computing device, and executing, by the processing device, processes that correspond to the commands.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: August 13, 2019
    Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
    Inventors: Scott A. Friedman, Prince R. Remegio, Tim Uwe Falkenmayer, Roger Akira Kyle, Ryoma Kakimi, Luke D. Heide, Nishikant Narayan Puranik
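    A minimal version of the escalation logic: if frustration indicators are found in the voice input, the user is connected to an operator and the offending input/response pair is forwarded. The cue list and callback names are invented for the sketch; real indicator detection would be far richer.
```python
FRUSTRATION_CUES = {"not what i said", "that's wrong", "ugh", "stop", "no no"}

def handle_voice_input(transcript: str, last_command: str, last_response: str,
                       connect_call_center, send_context) -> None:
    """If the transcript contains frustration indicators, connect the user to a
    call center operator and forward the input and response that caused it."""
    frustrated = any(cue in transcript.lower() for cue in FRUSTRATION_CUES)
    if frustrated:
        connect_call_center()
        send_context({"user_input": last_command,
                      "vehicle_response": last_response})
```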
  • Patent number: 10366694
    Abstract: Systems and methods are presented for efficient cross-fading (or other multiple clip processing) of compressed domain information streams on a user or client device, such as a telephone, tablet, computer or MP3 player, or any consumer device with audio playback. Exemplary implementation systems may provide cross-fades between AAC/Enhanced AAC Plus (EAACPlus) information streams or between MP3 information streams, or even between information streams of unmatched formats (e.g., AAC to MP3 or MP3 to AAC). Furthermore, these systems are distinguished by the fact that the cross-fade is applied directly to the compressed bitstreams so that a single decode operation may be performed on the resulting bitstream. Moreover, using the described methods, similar cross-fades in the compressed domain between information streams utilizing other compression formats, such as, for example, MP2, AC-3, and PAC, can also be advantageously implemented.
    Type: Grant
    Filed: October 2, 2017
    Date of Patent: July 30, 2019
    Assignee: Sirius XM Radio Inc.
    Inventors: Raymond Lowe, Mark Kalman, Deepen Sinha, Christopher Ward
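    For intuition, the sketch below shows a plain sample-domain linear cross-fade between two clips. Note that the patented approach applies the cross-fade to the compressed bitstreams themselves so that only a single decode is needed; operating on decoded samples here is a simplification for illustration.
```python
import numpy as np

def crossfade(clip_a: np.ndarray, clip_b: np.ndarray, overlap: int) -> np.ndarray:
    """Linear cross-fade over `overlap` samples: clip_a fades out while clip_b
    fades in, and the non-overlapping parts are kept unchanged."""
    fade_out = np.linspace(1.0, 0.0, overlap)
    fade_in = 1.0 - fade_out
    mixed = clip_a[-overlap:] * fade_out + clip_b[:overlap] * fade_in
    return np.concatenate([clip_a[:-overlap], mixed, clip_b[overlap:]])
```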
  • Patent number: 10366685
    Abstract: Example apparatus, articles of manufacture and methods to determine semantic audio information for audio are disclosed. Example methods include extracting a plurality of audio features from the audio, at least one of the plurality of audio features including at least one of a temporal feature, a spectral feature, a harmonic feature, or a rhythmic feature. Example methods also include comparing the plurality of audio features to a plurality of stored audio feature ranges having tags associated therewith. Example methods further include determining a set of ranges of the plurality of stored audio feature ranges having closest matches to the plurality of audio features, a tag associated with the set of ranges having the closest matches to be used to determine the semantic audio information for the audio.
    Type: Grant
    Filed: October 10, 2017
    Date of Patent: July 30, 2019
    Assignee: The Nielsen Company (US), LLC
    Inventors: Alan Neuhauser, John Stavropoulos
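    The range-matching step can be sketched as counting how many extracted features fall inside each stored, tagged set of ranges and returning the best-scoring tag. The feature names, ranges, and scoring rule below are examples, not values from the patent.
```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class TaggedRange:
    tag: str                                  # e.g. "speech", "dance"
    ranges: Dict[str, Tuple[float, float]]    # feature name -> (low, high)

def closest_tag(features: Dict[str, float], candidates: List[TaggedRange]) -> str:
    """Score each stored set of feature ranges by how many extracted features
    fall inside it and return the tag of the best match."""
    def score(c: TaggedRange) -> int:
        return sum(low <= features.get(name, float("nan")) <= high
                   for name, (low, high) in c.ranges.items())
    return max(candidates, key=score).tag

tag = closest_tag(
    {"tempo": 128.0, "spectral_centroid": 2400.0},
    [TaggedRange("speech", {"tempo": (0, 40), "spectral_centroid": (100, 1500)}),
     TaggedRange("dance", {"tempo": (110, 140), "spectral_centroid": (1500, 4000)})])
# tag == "dance"
```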
  • Patent number: 10360883
    Abstract: Example articles of manufacture and apparatus disclosed herein for producing supplemental information for audio signature data obtain the audio signature data of a first time period including data relating to at least one of time or frequency components representing a first characteristic of media. Disclosed examples also obtain first semantic audio signature data, for the first time period, that is a measure of generalized information representing characteristics of the media. Disclosed examples further store the audio signature data of the first time period in association with a second time period when it is determined that second semantic audio signature data for the second time period substantially matches the first semantic audio signature data for the first time period.
    Type: Grant
    Filed: August 29, 2017
    Date of Patent: July 23, 2019
    Assignee: The Nielsen Company (US), LLC
    Inventors: Alan Neuhauser, John Stavropoulos
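    One way to picture the matching rule: if the generalized (semantic) signatures of two time periods are sufficiently similar, the detailed signature of the first period is also stored against the second. Representing semantic signatures as sets of descriptors and using Jaccard similarity are assumptions made only for this sketch.
```python
def associate_signatures(signature_t1, semantic_t1, semantic_t2, store,
                         min_similarity: float = 0.9) -> None:
    """Store the detailed signature of period 1 under period 2 as well when the
    two periods' semantic (generalized) signatures substantially match."""
    s1, s2 = set(semantic_t1), set(semantic_t2)
    similarity = len(s1 & s2) / max(len(s1 | s2), 1)
    if similarity >= min_similarity:
        store("period_2", signature_t1)
```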
  • Patent number: 10324967
    Abstract: A system for performing semantic search receives an electronic text corpus and separates the text corpus into a plurality of sentences. The system parses and converts each sentence into a sentence tree. The system receives a search query and matches the search query with one or more of the sentence trees.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: June 18, 2019
    Assignee: Oracle International Corporation
    Inventors: Vladimir Zelevinsky, Yevgeniy Dashevsky, Diana Ye
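    The sketch below substitutes a toy subject-verb-object split for the real parser, but it shows the shape of the pipeline: each sentence becomes a small tree, the query is parsed the same way, and matching compares the trees. The verb list and matching rule are illustrative assumptions.
```python
VERBS = {"acquired", "hired", "announced", "released"}

def to_tree(sentence: str):
    """Toy parse: split one sentence into a (subject, verb, object) tree.
    A real system would use a full syntactic parser here."""
    words = sentence.rstrip(".").split()
    for i, w in enumerate(words):
        if w.lower() in VERBS:
            return {"verb": w.lower(),
                    "subject": " ".join(words[:i]).lower(),
                    "object": " ".join(words[i + 1:]).lower()}
    return None

def search(query: str, corpus: list) -> list:
    """Match the query tree against each sentence tree on verb and object."""
    q = to_tree(query)
    trees = [(s, to_tree(s)) for s in corpus]
    return [s for s, t in trees
            if t and q and t["verb"] == q["verb"] and q["object"] in t["object"]]

corpus = ["Acme acquired Beta Corp in 2018.", "Acme hired ten engineers."]
search("who acquired Beta Corp", corpus)   # ['Acme acquired Beta Corp in 2018.']
```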
  • Patent number: 10325611
    Abstract: An audio decoder for providing a decoded audio information on the basis of an encoded audio information includes a linear-prediction-domain decoder configured to provide a first decoded audio information on the basis of an audio frame encoded in a linear prediction domain, a frequency domain decoder configured to provide a second decoded audio information on the basis of an audio frame encoded in a frequency domain, and a transition processor. The transition processor is configured to obtain a zero-input-response of a linear predictive filtering, wherein an initial state of the linear predictive filtering is defined depending on the first decoded audio information and the second decoded audio information, and modify the second decoded audio information depending on the zero-input-response, to obtain a smooth transition between the first and the modified second decoded audio information.
    Type: Grant
    Filed: January 26, 2017
    Date of Patent: June 18, 2019
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Emmanuel Ravelli, Guillaume Fuchs, Sascha Disch, Markus Multrus, Grzegorz Pietrzyk, Benjamin Schubert
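    The transition trick is concrete enough to sketch: run the LPC synthesis filter with zero input, starting from the state left by the end of the first decoded audio, and use that ringing to modify the start of the second decoded audio. The coefficient convention (y[n] = -sum a_k * y[n-k]), the smoothing length, and the fade applied to the zero-input response are assumptions for the example.
```python
import numpy as np

def zero_input_response(lpc, history, n: int) -> np.ndarray:
    """Run the all-pole LPC synthesis filter with zero excitation, starting
    from the state left by `history` (the end of the first decoded audio)."""
    order = len(lpc)
    state = list(history[-order:])          # most recent samples, oldest first
    out = []
    for _ in range(n):
        # zero input: y[n] = -(a1*y[n-1] + a2*y[n-2] + ... + ap*y[n-p])
        y = -sum(a * s for a, s in zip(lpc, reversed(state)))
        out.append(y)
        state = state[1:] + [y]
    return np.asarray(out)

def smooth_transition(first_audio, second_audio, lpc, n_smooth: int = 64):
    """Modify the start of the second decoded audio with the zero-input
    response so the two decoder outputs join without a discontinuity."""
    zir = zero_input_response(lpc, first_audio, n_smooth)
    second = np.array(second_audio, dtype=float)
    second[:n_smooth] += zir * np.linspace(1.0, 0.0, n_smooth)  # fade the ZIR out
    return second
```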
  • Patent number: 10325599
    Abstract: Systems and methods for extracting contact information from a message are described. A system can receive a message for a recipient, where the message originates from a message source having a first contact identifier (e.g., a phone number or text address). The system can determine text data associated with the content of that message and process the text data to determine that the message refers to a second contact identifier that is different from the first contact identifier. The system may output the message to a recipient device (such as using text-to-speech) and may store an association between the message source and the second contact identifier. When the recipient speaks a command to reply to the first message or contact the message source, the system may determine the reply is intended for the message source and may route the reply using the second contact identifier included in the first message.
    Type: Grant
    Filed: December 28, 2016
    Date of Patent: June 18, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Pathivada Rajsekhar Naidu, Tony Roy Hardie
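    A small sketch of the extraction-and-routing idea: a second contact identifier found in the message body (here, by a phone-number regex) is remembered for the recipient, and later replies to that source are routed to it instead of the identifier the message arrived from. The regex and in-memory map are stand-ins for the described text processing and storage.
```python
import re

# recipient -> {source name: preferred callback identifier}
contact_map: dict = {}

def process_incoming(recipient: str, source_name: str, source_id: str, text: str) -> None:
    """If the message body mentions a different contact identifier, remember
    it for later replies to this source; otherwise keep the source's own id."""
    mentioned = re.search(r"\+?\d[\d\-\s]{6,}\d", text)
    second_id = mentioned.group(0) if mentioned else source_id
    contact_map.setdefault(recipient, {})[source_name] = second_id

def route_reply(recipient: str, source_name: str, fallback_id: str) -> str:
    """Reply to the identifier extracted from the earlier message when one was
    stored; otherwise reply to the identifier the message came from."""
    return contact_map.get(recipient, {}).get(source_name, fallback_id)

process_incoming("alice", "Bob", "+1 555 0100", "It's Bob - call me at 555-867-5309")
route_reply("alice", "Bob", "+1 555 0100")   # '555-867-5309'
```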
  • Patent number: 10319375
    Abstract: Audio data, corresponding to an utterance spoken by a person within a detection range of a voice communications device, can include an audio message portion. The audio data can be captured and analyzed to determine the intent to send a message. Based at least in part upon that intent, a remaining portion of the audio data can be analyzed to determine the intended message target or recipient, as well as the portion corresponding to the actual message payload. Once determined, the audio data can be trimmed to the message payload, and the message payload can be delivered as an audio message to the target recipient.
    Type: Grant
    Filed: December 28, 2016
    Date of Patent: June 11, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Neil Christopher Fritz, Lakshya Bhagat, Scott Southwood, Katelyn Doran, Brett Lounsbury, Christo Frank Devaraj
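    Assuming word-level timings from speech recognition are available, the trimming step can be sketched as locating the recipient after the messaging intent and keeping only the time span of the words that follow it. The "tell <name>" phrasing and the timing format are made-up examples, not details from the patent.
```python
def extract_audio_message(transcript_words: list, word_timings: list):
    """Given aligned words and their (start, end) times in seconds, find the
    recipient after a 'tell <name> ...' style intent and return the time span
    of the remaining words, i.e. where to trim the audio to the payload."""
    lowered = [w.lower() for w in transcript_words]
    if "tell" in lowered:
        i = lowered.index("tell")
        recipient = transcript_words[i + 1]
        payload_start = word_timings[i + 2][0]     # message begins after recipient
        payload_end = word_timings[-1][1]
        return recipient, (payload_start, payload_end)
    return None, None

words = ["Alexa", "tell", "Sam", "I'll", "be", "late"]
times = [(0.0, 0.4), (0.5, 0.8), (0.9, 1.2), (1.3, 1.5), (1.6, 1.7), (1.8, 2.2)]
extract_audio_message(words, times)   # ('Sam', (1.3, 2.2))
```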
  • Patent number: 10311887
    Abstract: A server includes one or more processors, and a computer readable memory coupled to the one or more processors. The computer readable memory contains instructions which, when executed by the one or more processors, cause the one or more processors to perform the operations of receiving captured audio via a microphone of a mobile or wearable device at a location remote from the server via a communications module operatively coupled to the mobile or wearable device, analyzing the captured audio to provide analyzed captured audio, and sending information to the communications module in response to the analyzed captured audio, the information including instructions to initiate control of media content or initiate operations of the mobile or wearable device. A method operating at the server and other embodiments are disclosed.
    Type: Grant
    Filed: August 11, 2014
    Date of Patent: June 4, 2019
    Assignee: Staton Techiya, LLC
    Inventor: Steven Wayne Goldstein
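    The server-side loop reduces to: analyze the audio received from the remote device, then answer with instructions that control media content or device operations. The transcribe and send_to_device callbacks and the command phrases are assumptions used only for this sketch.
```python
def handle_captured_audio(audio_bytes: bytes, transcribe, send_to_device) -> None:
    """Server-side sketch: analyze audio received from the remote device and
    answer with instructions that control media playback or the device itself."""
    text = transcribe(audio_bytes).lower()         # stand-in for real analysis
    if "pause" in text:
        send_to_device({"instruction": "media_control", "action": "pause"})
    elif "volume up" in text:
        send_to_device({"instruction": "device_operation", "action": "volume_up"})
    else:
        send_to_device({"instruction": "none"})
```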