Patents Examined by Martin Lerner
  • Patent number: 11264058
    Abstract: Coordinated audio and video filter pairs are applied to enhance artistic and emotional content of audiovisual performances. Such filter pairs, when applied in audio and video processing pipelines of an audiovisual application hosted on a portable computing device (such as a mobile phone or media player, a computing pad or tablet, a game controller or a personal digital assistant or book reader) can allow user selection of effects that enhance both audio and video coordinated therewith. Coordinated audio and video are captured, filtered and rendered at the portable computing device using camera and microphone interfaces, using digital signal processing software executable on a processor and using storage, speaker and display devices of, or interoperable with, the device. By providing audiovisual capture and personalization on an intimate handheld device, social interactions and postings of a type made popular by modern social networking platforms can now be extended to audiovisual content.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: March 1, 2022
    Assignee: Smule, Inc.
    Inventors: Parag P. Chordia, Perry R. Cook, Mark T. Godfrey, Prerna Gupta, Nicholas M. Kruge, Randal J. Leistikow, Alexander M. D. Rae, Ian S. Simon
  • Patent number: 11256871
    Abstract: A method, and a computer product encoding the method, are provided for preparing a domain- or subdomain-specific glossary. The method includes using probabilities, word context, common terminology, and differing terminology to identify domain- and subdomain-specific language, with a related glossary updated according to the method.
    Type: Grant
    Filed: October 17, 2019
    Date of Patent: February 22, 2022
    Assignee: VERINT AMERICAS INC.
    Inventors: Christopher J. Jeffs, Ian Beaver
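The glossary-building idea above — flagging terms whose usage is statistically distinctive of a domain corpus — can be illustrated with a minimal sketch. This is not the patented method; it is a toy relative-frequency heuristic, and the `ratio` threshold and the add-one smoothing on general-corpus counts are assumptions for illustration.

```python
from collections import Counter

def domain_glossary(domain_docs, general_docs, ratio=3.0):
    """Flag words whose relative frequency in the domain corpus is at least
    `ratio` times their (smoothed) relative frequency in a general corpus."""
    d = Counter(w for doc in domain_docs for w in doc.split())
    g = Counter(w for doc in general_docs for w in doc.split())
    d_total, g_total = sum(d.values()), sum(g.values())
    return sorted(w for w, c in d.items()
                  if (c / d_total) >= ratio * (g.get(w, 0) + 1) / g_total)

glossary = domain_glossary(
    ["the stent occlusion stent"],             # toy medical-domain corpus
    ["the cat sat on the mat", "the dog"],     # toy general corpus
)
```

Common words like "the" occur in both corpora at similar rates and are filtered out, while corpus-specific terms survive into the glossary.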
  • Patent number: 11250849
    Abstract: A voice wake-up apparatus used in an electronic device that includes a voice activity detection circuit, a storage circuit and a smart detection circuit is provided. The voice activity detection circuit receives an input sound signal and detects a voice activity section of the input sound signal. The storage circuit stores a predetermined voice sample. The smart detection circuit receives the input sound signal to perform a time domain and a frequency domain detection on the voice activity section to generate a syllable and frequency characteristic detection result, compare the syllable and frequency characteristic detection result with the predetermined voice sample and generate a wake-up signal to a processing circuit of the electronic device when the syllable and frequency characteristic detection result matches the predetermined voice sample to wake up the processing circuit.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: February 15, 2022
    Assignee: REALTEK SEMICONDUCTOR CORPORATION
    Inventors: Chi-Te Wang, Wen-Yu Huang
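The wake-up flow in the abstract above — voice-activity detection followed by comparison of the active section against a stored sample — can be sketched in a toy form. This is an illustration only: the energy threshold, frame length, and dominant-frequency comparison are stand-ins for the patent's syllable and frequency characteristic detection, and the 440 Hz tone stands in for a spoken wake word.

```python
import numpy as np

def detect_voice_activity(signal, frame_len=160, energy_thresh=0.01):
    """Mark frames whose mean energy exceeds a threshold as voice activity."""
    n = len(signal) // frame_len
    frames = signal[:n * frame_len].reshape(n, frame_len)
    return (frames ** 2).mean(axis=1) > energy_thresh

def dominant_freq(x, fs=8000):
    """Frequency (Hz) of the strongest non-DC bin in the spectrum."""
    spectrum = np.abs(np.fft.rfft(x))
    return (np.argmax(spectrum[1:]) + 1) * fs / len(x)

def matches_wake_sample(section, sample, fs=8000, tol=0.15):
    """Match if the dominant frequencies differ by less than `tol` (relative)."""
    f1, f2 = dominant_freq(section, fs), dominant_freq(sample, fs)
    return abs(f1 - f2) / max(f1, f2) <= tol

t = np.linspace(0, 1, 8000, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)            # stand-in for the input sound
sample = np.sin(2 * np.pi * 440 * t[:1600])   # stored predetermined sample
wake = detect_voice_activity(tone).any() and matches_wake_sample(tone, sample)
```

Only when both stages agree — activity detected and features matching the stored sample — would a wake-up signal be sent to the processing circuit.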
  • Patent number: 11244675
    Abstract: An output-content control device includes a voice classifying unit configured to analyze a voice spoken by a user and acquired by a voice acquiring unit to determine whether the voice is a predetermined voice; an intention analyzing unit configured to analyze the voice acquired by the voice acquiring unit to detect intention information indicating what kind of information is wished to be acquired by the user; a notification-information acquiring unit configured to acquire notification information to be notified to the user based on the intention information; and an output-content generating unit configured to generate an output sentence as sentence data to be output to the user based on the notification information and also configured to generate the output sentence in which at least one word selected among words included in the notification information is replaced with another word when the voice is determined to be the predetermined voice.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: February 8, 2022
    Assignee: JVCKENWOOD Corporation
    Inventor: Tatsumi Naganuma
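The word-replacement behavior described above — swapping words in the output sentence when the speaker matches a predetermined voice — can be sketched minimally. The replacement table and the child-friendly-wording scenario are assumptions for illustration, not details from the patent.

```python
# Hypothetical replacement table: simpler wording for a predetermined voice
# (e.g. when the speaker is classified as a child).
replacements = {"precipitation": "rain", "approximately": "about"}

def generate_output(notification_words, is_predetermined_voice):
    """Build the output sentence, replacing selected words only when the
    voice was classified as the predetermined voice."""
    if not is_predetermined_voice:
        return " ".join(notification_words)
    return " ".join(replacements.get(w, w) for w in notification_words)

msg = ["precipitation", "expected", "approximately", "noon"]
```

The same notification information yields different surface wording depending on the voice classification result.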
  • Patent number: 11238864
    Abstract: Generating expanded responses that guide continuance of a human-to-computer dialog that is facilitated by a client device and that is between at least one user and an automated assistant. The expanded responses are generated by the automated assistant in response to user interface input provided by the user via the client device, and are caused to be rendered to the user via the client device, as a response, by the automated assistant, to the user interface input of the user. An expanded response is generated based on at least one entity of interest determined based on the user interface input, and is generated to incorporate content related to one or more additional entities that are related to the entity of interest, but that are not explicitly referenced by the user interface input.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: February 1, 2022
    Assignee: Google LLC
    Inventors: Michael Fink, Vladimir Vuskovic, Shimon Or Salant, Deborah Cohen, Asaf Revach, David Kogan, Andrew Callahan, Richard Borovoy, Andrew Richardson, Eran Ofek, Idan Szpektor, Jonathan Berant, Yossi Matias
  • Patent number: 11238508
    Abstract: Systems and methods for improving an information provisioning system using a natural language conversational assistant are provided. A machine agent initiates an interactive natural language conversation with a user to provide the user with guidance on one or more products. The machine agent receives a request for information from the user, and accesses, from a product knowledge database, textual statements about features of the one or more products, whereby the textual statements are obtained by a machine-based offline knowledge extraction process that extracts the textual statements from reviews or product guides. Based on the accessed textual statements and an overall empirical utility of each of the accessed textual statements, the machine agent determines one or more statements of the accessed textual statements to convey to the user. The machine agent causes presentation of the one or more statements to the user.
    Type: Grant
    Filed: August 22, 2018
    Date of Patent: February 1, 2022
    Assignee: eBay Inc.
    Inventor: Jean-David Ruvini
  • Patent number: 11238235
    Abstract: From a natural language document using a natural language concept analyzer, a set of natural language input concepts is extracted. Using a query generation model, a query corresponding to the set of natural language input concepts is generated. From a set of natural language results using the natural language concept analyzer, a set of natural language output concepts is extracted, a result in the set of natural language results comprising a portion of narrative text within a natural language corpus, the result identified by searching the natural language corpus using the query. Using the set of natural language input concepts and the set of natural language output concepts, a novelty concept is scored, the scored novelty concept comprising a degree to which a natural language input concept in the set of natural language input concepts is external to a boundary defined by the set of natural language output concepts.
    Type: Grant
    Filed: September 18, 2019
    Date of Patent: February 1, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Sarbajit K. Rakshit, James E. Bostick, Craig M. Trim, John M. Ganci, Jr., Martin G. Keen
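The novelty-scoring idea above — measuring the degree to which an input concept falls outside the boundary defined by the output concepts — can be sketched with a toy similarity measure. This is an illustration only: the patent does not specify Jaccard token overlap; it is used here as an assumed stand-in for concept similarity.

```python
def novelty_scores(input_concepts, output_concepts):
    """Score each input concept by how far it falls outside the 'boundary'
    of the output concepts: 1 minus its best token-overlap (Jaccard)."""
    def jaccard(a, b):
        sa, sb = set(a.split()), set(b.split())
        return len(sa & sb) / len(sa | sb)
    return {c: 1 - max((jaccard(c, o) for o in output_concepts), default=0.0)
            for c in input_concepts}

scores = novelty_scores(
    ["quantum error correction", "surface code"],   # input concepts
    ["surface code", "error correction"],           # output concepts
)
```

A concept fully covered by the search results scores 0 (no novelty), while a concept only partially covered scores higher.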
  • Patent number: 11232785
    Abstract: Disclosed are a method and an apparatus for speech recognition. In a method for processing speech recognition according to an embodiment of the disclosure, a relationship of a named entity is extracted and each named entity is clustered based on the extracted relationship of the named entity. An utterance intent is determined by considering not only information about the named entity itself, but also relationship information of the clustered named entity tagged in the named entity, which may improve the accuracy of speech recognition in the apparatus for speech recognition. A user equipment of the present disclosure can be associated with artificial intelligence modules, drones (unmanned aerial vehicles (UAVs)), robots, augmented reality (AR) devices, virtual reality (VR) devices, devices related to 5G service, etc.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: January 25, 2022
    Assignee: LG ELECTRONICS INC.
    Inventors: Jiwoo Seo, Jonghoon Chae
  • Patent number: 11232789
    Abstract: The present invention keeps a dialogue continuing for a long time without causing an uncomfortable feeling for a user. A dialogue system 10 includes at least an input part 1 that receives a user utterance, which is an utterance from a user, and a presentation part 5 that presents an utterance. The input part 1 receives a user utterance performed by the user. A presentation part 5-1 presents a dialogue-establishing utterance which does not include any content words. A presentation part 5-2 presents, after the dialogue-establishing utterance, a second utterance associated with a generation target utterance, which is one or more utterances performed before the user utterance and including at least the user utterance.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: January 25, 2022
    Assignees: NIPPON TELEGRAPH AND TELEPHONE CORPORATION, OSAKA UNIVERSITY
    Inventors: Hiroaki Sugiyama, Toyomi Meguro, Junji Yamato, Yuichiro Yoshikawa, Hiroshi Ishiguro, Takamasa Iio, Tsunehiro Arimoto
  • Patent number: 11227612
    Abstract: An audio frame loss recovery method and apparatus are disclosed. In one implementation, data from some but not all audio frames is included in a redundant frame. The audio frames whose data is not included in the redundant frame may include multiple audio frames but may not include more than two consecutive audio frames. Because not all audio frames are used in the redundant frame, the amount of information needed to be transmitted in the redundant frame is reduced. An audio frame lost during transmission may be recovered either from the redundant frame, when the redundant frame includes data of the lost frame, or from at least one neighboring frame of the lost frame, derived from either the redundant frame or the successfully transmitted audio frames, when the redundant frame does not include data of the lost frame.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: January 18, 2022
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventor: Junbin Liang
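The recovery scheme above can be sketched in a toy form: a redundant frame carries only every other audio frame (so no two consecutive frames are both absent), and a lost frame is either copied from the redundant frame or interpolated from its neighbors. Frames are simplified to single numbers here; this is an assumed illustration, not the patent's codec-level detail.

```python
def build_redundant_frame(frames):
    # Keep only even-indexed frames: no two consecutive frames are omitted.
    return {i: f for i, f in enumerate(frames) if i % 2 == 0}

def recover(received, lost_index, redundant):
    """Recover a lost frame from the redundant frame, or interpolate it
    from neighbouring frames when the redundant frame lacks its data."""
    if lost_index in redundant:
        return redundant[lost_index]
    left = redundant.get(lost_index - 1, received.get(lost_index - 1))
    right = redundant.get(lost_index + 1, received.get(lost_index + 1))
    neighbours = [f for f in (left, right) if f is not None]
    return sum(neighbours) / len(neighbours)

frames = [0.0, 10.0, 20.0, 30.0]
redundant = build_redundant_frame(frames)   # holds frames 0 and 2
received = {0: 0.0, 2: 20.0, 3: 30.0}       # frame 1 was lost in transit
```

Losing frame 2 is handled by a direct copy from the redundant frame; losing frame 1 (absent from the redundant frame) falls back to averaging frames 0 and 2, both of which are guaranteed available.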
  • Patent number: 11227127
    Abstract: Embodiments relate to an intelligent computer platform to support a chatbot platform. A semantically enriched document is subjected to natural language processing to generate a cache of tokens, and to further classify the tokens, including noun and verb tokens. For each verb token a corresponding intent is generated, and for each noun token a corresponding entity is generated. A relationship between the generated intents and entities is mapped, and a topology representing the mapped relationship is constructed. A primary verb is identified and assigned as a root node in the topology, and an arrangement of entities related to the primary verb is identified and assigned as child nodes related to the root node. The constructed topology is consumed into an AI schema for implementation in the chatbot platform to support real-time communication flow.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: January 18, 2022
    Assignee: International Business Machines Corporation
    Inventors: Balaji Sankar Kumar, Vishal George Palliyathu, John Kurian, Ranjith E. Raman, Michael J. Iantosca
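The intent/entity topology described above can be sketched minimally: verbs map to intents, nouns map to entities, and the primary verb becomes the root with its related entities as children. The hard-coded token classes and the first-verb-as-primary rule are assumptions standing in for real NLP token classification.

```python
VERBS = {"book", "cancel", "pay"}     # toy verb-token class (assumed)
NOUNS = {"flight", "hotel", "invoice"}  # toy noun-token class (assumed)

def build_topology(tokens):
    """Map verbs to intents and nouns to entities, then root the topology
    at the primary (here: first) verb with entities as child nodes."""
    intents = [t for t in tokens if t in VERBS]
    entities = [t for t in tokens if t in NOUNS]
    if not intents:
        return None
    return {"root": intents[0] + "_intent",
            "children": [e + "_entity" for e in entities]}

topo = build_topology("please book a flight and a hotel".split())
```

A schema like this tree could then drive a chatbot's dialog routing: match the root intent, then slot-fill the child entities.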
  • Patent number: 11217255
    Abstract: Systems and processes for operating an intelligent automated assistant to provide extension of digital assistant services are provided. An example method includes, at an electronic device having one or more processors, receiving, from a first user, a first speech input representing a user request. The method further includes obtaining an identity of the first user; and in accordance with the user identity, providing a representation of the user request to at least one of a second electronic device or a third electronic device. The method further includes receiving, based on a determination of whether the second electronic device or the third electronic device, or both, is to provide the response to the first electronic device, the response to the user request from the second electronic device or the third electronic device. The method further includes providing a representation of the response to the first user.
    Type: Grant
    Filed: August 16, 2017
    Date of Patent: January 4, 2022
    Assignee: Apple Inc.
    Inventors: Yoon Kim, Charles Srisuwananukorn, David A. Carson, Thomas R. Gruber, Justin G. Binder
  • Patent number: 11217244
    Abstract: A system including at least one memory, and at least one processor operatively connected to the memory is provided. The memory may store instructions that, when executed, cause the processor to receive an input of selecting at least one domain from a user and store the input in the memory, recognize, at least partially based on data regarding a user utterance received after the input is stored, the utterance, determine, when the utterance does not comprise a domain name, whether or not the utterance corresponds to the selected domain, and generate a response by processing the utterance by using the selected domain when the utterance corresponds to the selected domain.
    Type: Grant
    Filed: August 7, 2019
    Date of Patent: January 4, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jisoo Yi, Chunga Han, Marco Paolo Antonio Iacono, Christopher Dean Brigham, Gaurav Bhushan, Mark Gregory Gabel
  • Patent number: 11205046
    Abstract: A method for topic early warning includes: acquiring a self-defined keyword; calculating similarity between the self-defined keyword and each word in a corpus, and acquiring extended keywords related to the self-defined keyword from the corpus according to the similarity; selecting a target keyword from the extended keywords according to a type of the extended keywords and the similarity between the extended keywords and the self-defined keyword, and adding the target keyword to a target keyword list; performing real-time monitoring according to the target keyword in the target keyword list; and performing topic early warning when monitoring shows that the number of topics corresponding to the target keyword has reached a preset threshold.
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: December 21, 2021
    Assignee: PING AN TECHNOLOGY (SHENZHEN) CO., LTD.
    Inventors: Jianzong Wang, Zhangcheng Huang, Tianbo Wu, Jing Xiao
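The pipeline above — expand a seed keyword by embedding similarity, then warn when monitored topic counts cross a threshold — can be sketched as follows. The toy two-dimensional "embeddings", the 0.9 similarity cutoff, and the count threshold are all assumptions for illustration.

```python
import numpy as np

corpus_vectors = {                       # toy word embeddings (assumed)
    "breach": np.array([0.9, 0.1]),
    "leak":   np.array([0.85, 0.2]),
    "picnic": np.array([0.1, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def expand_keywords(seed, min_sim=0.9):
    """Return corpus words whose similarity to the seed clears min_sim."""
    sv = corpus_vectors[seed]
    return [w for w, v in corpus_vectors.items()
            if w != seed and cosine(sv, v) >= min_sim]

def should_warn(topic_counts, targets, threshold=3):
    """Raise an early warning once monitored topic mentions hit the threshold."""
    return sum(topic_counts.get(t, 0) for t in targets) >= threshold

targets = ["breach"] + expand_keywords("breach")
```

"leak" is pulled into the target list as an extended keyword while the unrelated "picnic" is not, and the warning fires once combined topic counts reach the threshold.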
  • Patent number: 11200901
    Abstract: A method for responding to a voice activated request includes receiving a speech input request from a smart speaker requesting energy management data associated with energy consumption at a premises of the smart speaker. The method also includes generating a voice service request including a first query for a first data source. The first query includes a request for the energy management data. Additionally, the method includes communicating the first query to the first data source and receiving a first response to the first query from the first data source. Further, the method includes generating an audible speech output in response to the speech input request based on the first response to the first query and transmitting the audible speech output to the smart speaker. The smart speaker audibly transmits the audible speech output.
    Type: Grant
    Filed: January 27, 2020
    Date of Patent: December 14, 2021
    Assignee: LANDIS+GYR INNOVATIONS, INC.
    Inventors: Keith Mario Torpy, James Randall Turner, David Decker, Ruben E. Salazar Cardozo
  • Patent number: 11195507
    Abstract: Systems and methods are described herein for generating alternate audio for a media stream. The media system receives media that is requested by the user. The media comprises a video and audio. The audio includes words spoken in a first language. The media system stores the received media in a buffer as it is received. The media system separates the audio from the buffered media and determines an emotional state expressed by spoken words of the first language. The media system translates the words spoken in the first language into words spoken in a second language. Using the translated words of the second language, the media system synthesizes speech having the emotional state previously determined. The media system then retrieves the video of the received media from the buffer and synchronizes the synthesized speech with the video to generate the media content in a second language.
    Type: Grant
    Filed: October 4, 2018
    Date of Patent: December 7, 2021
    Assignee: Rovi Guides, Inc.
    Inventors: Vijay Kumar, Rajendran Pichaimurthy, Madhusudhan Seetharam
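The translate-while-preserving-emotion pipeline above can be sketched with toy stand-ins: a dictionary plays the part of machine translation, a keyword lookup plays the part of emotion detection, and "synthesis" just tags the translated text with the carried-over emotional state. All three components are hypothetical placeholders, not the system's actual models.

```python
# Toy stand-ins for translation and emotion classification (assumed).
lexicon = {"bonjour": "hello", "merci": "thank", "ami": "friend"}
emotion_words = {"merci": "grateful"}

def detect_emotion(words):
    """Return the first emotion triggered by a word, else 'neutral'."""
    for w in words:
        if w in emotion_words:
            return emotion_words[w]
    return "neutral"

def translate(words):
    return [lexicon.get(w, w) for w in words]

def synthesize(words, emotion):
    """Stand-in for TTS: carry the source-language emotion into the output."""
    return {"text": " ".join(words), "emotion": emotion}

src = ["bonjour", "ami", "merci"]
out = synthesize(translate(src), detect_emotion(src))
```

The key point mirrored from the abstract is the ordering: emotion is determined from the original-language audio before translation, then applied to the synthesized second-language speech.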
  • Patent number: 11190166
    Abstract: The present invention provides a system and method for representing quasi-periodic (“qp”) waveforms comprising, representing a plurality of limited decompositions of the qp waveform, wherein each decomposition includes a first and second amplitude value and at least one time value. In some embodiments, each of the decompositions is phase adjusted such that the arithmetic sum of the plurality of limited decompositions reconstructs the qp waveform. These decompositions are stored into a data structure having a plurality of attributes. Optionally, these attributes are used to reconstruct the qp waveform, or patterns or features of the qp wave can be determined by using various pattern-recognition techniques. Some embodiments provide a system that uses software, embedded hardware or firmware to carry out the above-described method. Some embodiments use a computer-readable medium to store the data structure and/or instructions to execute the method.
    Type: Grant
    Filed: December 7, 2015
    Date of Patent: November 30, 2021
    Assignee: Murata Vios, Inc.
    Inventors: Carlos A. Ricci, Vladimir V. Kovtun
  • Patent number: 11176323
    Abstract: A computer system generates a vector space model based on an ontology of concepts. One or more training examples are extracted for one or more concepts of a hierarchical ontology, wherein the one or more training examples for the one or more concepts are based on neighboring concepts in the hierarchical ontology. A plurality of vectors, each including one or more features, are initialized, wherein each vector corresponds to a concept of the one or more concepts. A vector space model is generated by iteratively modifying one or more vectors of the plurality of vectors to optimize a loss function. Natural language processing is performed using the vector space model. Embodiments of the present invention further include a method and program product for generating a vector space model in substantially the same manner described above.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: November 16, 2021
    Assignee: International Business Machines Corporation
    Inventors: Brendan Bull, Paul L. Felt, Andrew G. Hicks
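The ontology-driven training loop above can be sketched with a toy loss: treat each child/parent edge as a training example and iteratively pull the two concept vectors together, a crude stand-in for the patent's loss-function optimization. The four-concept ontology, the learning rate, and the squared-distance objective are all assumptions.

```python
import numpy as np

# Toy hierarchical ontology: child concept -> parent concept (assumed).
ontology = {"dog": "mammal", "cat": "mammal", "mammal": "animal"}

rng = np.random.default_rng(0)
vectors = {c: rng.normal(size=4)
           for c in set(ontology) | set(ontology.values())}

def train(vectors, edges, lr=0.1, epochs=200):
    """Gradient step on squared distance per edge: each concept moves
    toward its ontology neighbour, shrinking related-concept distances."""
    for _ in range(epochs):
        for child, parent in edges.items():
            diff = vectors[child] - vectors[parent]
            vectors[child] -= lr * diff
            vectors[parent] += lr * diff
    return vectors

vectors = train(vectors, ontology)
dog_cat = float(np.linalg.norm(vectors["dog"] - vectors["cat"]))
```

After training, sibling concepts ("dog", "cat") that share a parent in the hierarchy end up close in the vector space even though no direct edge connects them.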
  • Patent number: 11170177
    Abstract: A method is described comprising receiving a conversational transcript of a conversational interaction among a plurality of participants, wherein each participant contributes a sequence of contributions to the conversational interaction. The method includes projecting contributions of the plurality of participants into a semantic space using a natural language vectorization, wherein the semantic space describes semantic relationships among words of the conversational interaction. The method includes computing interaction process measures using information of the conversational transcript, the conversational interaction, and the natural language vectorization.
    Type: Grant
    Filed: July 30, 2018
    Date of Patent: November 9, 2021
    Inventors: Nia Marcia Maria Dowell, Tristan Nixon
  • Patent number: 11163963
    Abstract: There is a need for more effective and efficient natural language processing. This need can be addressed by, for example, solutions for performing/executing natural language processing using hybrid document embedding. In one example, a method includes identifying a natural language document associated with one or more document attributes, wherein the natural language document comprises one or more natural language words; determining an attribute-based document embedding for the natural language document, wherein the attribute-based document embedding is generated based on a document vector for the natural language document and a word vector for each natural language word of the one or more natural language words; processing the attribute-based document embedding using a predictive inference model to determine one or more document-related predictions for the natural language document; and performing one or more prediction-based actions based on the one or more document-related predictions.
    Type: Grant
    Filed: March 19, 2020
    Date of Patent: November 2, 2021
    Assignee: Optum Technology, Inc.
    Inventors: Shashi Kumar, Suman Roy, Vishal Pathak
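The hybrid embedding above — combining a document-attribute representation with word vectors — can be sketched minimally: concatenate an attribute-based document vector with the mean of the document's word vectors. The two-dimensional word vectors and one-dimensional attribute codes are toy assumptions, not the method's actual representation.

```python
import numpy as np

word_vecs = {"refund": [1.0, 0.0], "claim": [0.9, 0.1],
             "hello": [0.0, 1.0], "thanks": [0.1, 0.9]}
attr_vecs = {"billing": [1.0], "chat": [0.0]}  # toy document attributes

def embed(words, attribute):
    """Attribute-based document embedding: the document-attribute vector
    concatenated with the mean of the document's word vectors."""
    wv = np.mean([word_vecs[w] for w in words], axis=0)
    return np.concatenate([attr_vecs[attribute], wv])

e1 = embed(["refund", "claim"], "billing")
e2 = embed(["hello", "thanks"], "chat")
```

The resulting vector could then feed a downstream predictive model, which is the role the abstract assigns to the predictive inference step.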